Some recent readings

Interesting article about a classic pen-and-paper game: Ace of Aces: or, why you should Do Maths as a game designer

Activism is not a Social Club

“AI” Presets for Underwater Photography: Adaptive AI Preset Pack This is a good idea, especially if they really are doing the feasible thing, which is to run monocular depth estimation on the photo and then apply depth-aware “de-bluing” haze / energy removal. There's probably good money in this for at least a few years, since underwater photography is probably too niche for Adobe to Sherlock them and incorporate the feature directly into Photoshop / Lightroom.
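If I had to guess at the feasible pipeline, it's something like this: a minimal sketch that inverts a standard underwater attenuation model given an estimated depth map. The `beta` and `backscatter` values here are purely illustrative assumptions of mine, not anything the product documents.

```python
import numpy as np

def dehaze_underwater(image, depth, beta=(0.10, 0.05, 0.02),
                      backscatter=(0.0, 0.1, 0.3)):
    """Sketch of depth-aware 'de-bluing'.

    image: HxWx3 float array in [0, 1]; depth: HxW estimated depth (metres).
    Inverts the simple model I = J*exp(-beta*d) + B*(1 - exp(-beta*d)),
    where beta is per-channel attenuation (red falls off fastest underwater)
    and B is the bluish veiling light.
    """
    beta = np.asarray(beta)
    B = np.asarray(backscatter)
    t = np.exp(-beta * depth[..., None])        # per-channel transmission
    J = (image - B * (1.0 - t)) / np.maximum(t, 1e-3)
    return np.clip(J, 0.0, 1.0)
```

In a real product the depth map would come from a monocular depth network and the attenuation coefficients would presumably be fit per-image rather than hard-coded.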

Some neural net libraries from Apple. I’m tempted to do some simple networks in these to see what kind of speed I can get from the M-series chips. PyTorch supports mps as a backend, but in my experiments it’s about 1/4 the speed of same-generation CUDA on NVIDIA hardware:
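For reference, the kind of quick micro-benchmark behind that 1/4 figure looks something like this (a sketch; absolute numbers will vary by chip, tensor size, and dtype):

```python
import time
import torch

def matmul_benchmark(n=1024, iters=10, device=None):
    """Time n x n float32 matmuls; returns (seconds per iter, result)."""
    if device is None:
        # Fall back to CPU when MPS (Apple Silicon) isn't available.
        device = "mps" if torch.backends.mps.is_available() else "cpu"
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    c = a @ b  # warm-up: first call pays allocation/compilation costs
    start = time.perf_counter()
    for _ in range(iters):
        c = a @ b
    if device == "mps":
        torch.mps.synchronize()  # MPS ops are queued asynchronously
    elapsed = time.perf_counter() - start
    return elapsed / iters, c
```

Running the same sizes on an M-series Mac and a CUDA box gives a rough apples-to-apples comparison, as long as you remember the synchronize call: without it the MPS timing only measures kernel submission.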

It’s been cloudy here but the next time we have blue skies I’m going to try to see white blood cells in my eyelids:

I tried to get ChatGPT-4 and Claude to generate some equations in this style and they failed pretty miserably, both at the “didn’t compile” level (LaTeX being a little less common) and, more importantly, at the “group the logical components” level.
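By “group the logical components” I mean something like the following LaTeX sketch, where each meaningful term gets its own brace and label (my own toy example, not from the article in question):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\underbrace{\frac{1}{N}\sum_{i=1}^{N}}_{\text{average over samples}}
\;
\underbrace{\bigl(y_i - \hat{y}_i\bigr)^2}_{\text{squared error per sample}}
\]
\end{document}
```

The models could usually produce the bare formula; it was the annotation layer, deciding where the conceptual boundaries fall, that they got wrong.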

Copilot Workspace is a new thing from GitHub that works on repo-wide tasks. I hope to give it a try in the coming months:

Visual Blocks is a visual graph-design surface for neural vision tasks. Not sure how flexible it is, but I think that tools like this are very much the right way forward:

Grafychat is a front-end for LLM chatbots. One of the things I’ve been finding lately is that prompt engineering (god, I hate to use that term) is very, very iterative and difficult to edit. This may be a good tool for iterative development:

There have been a number of folks doing really nice “from scratch” explorations of LLMs. I liked this one: