Discussion about this post

Brian Carter

Hi Jack!

Enjoyed your appearance on my favorite podcast, ODD LOTS!!!

I wanted to ask you about a couple of things here:

My understanding is that AI comprises many things, including ML, DL, NLP, and LLMs (I understand some of those are subsets of others).

LLMs clearly have caught the public’s imagination more than, let’s say, ML methods for early detection of breast cancer…

But my (admittedly much less educated) take is that all of ML and DL is important.

I was also disappointed the more I tested LLMs on things like novel idea generation (in my case, as a comedian I was trying to get them to help me write jokes, and as a digital marketer, to come up with marketing ideas).

Because LLMs act as a normalization machine based on the relationships between things as they are, creativity seems to be more difficult for them. They’re more likely to give you the same “creative” things their training data said were creative. Or tired old dad jokes.

I did a tiny bit of research toward discovering how relationships are scored, and whether there’s a way to do the inverse, which might get us closer to a bisociation type of creativity. Or to discover what the 30-, 35-, or 90-degree angle from a relationship might be. I’ve actually prompted LLMs with polar ideas and asked them for 90-degree “left-hand turn” ideas, with some success. But I know it’s not coming from a quantified place.
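
For what it’s worth, a “quantified place” for this might look something like the sketch below: in embedding space, the relationship between two concepts is just a direction vector, and a literal 90-degree turn is any vector orthogonal to it. This is a toy illustration with random stand-in vectors, not real model embeddings:

import numpy as np

# Stand-ins for embeddings of two related concepts (say, "setup" and
# "punchline"); in practice these would come from an embedding model.
rng = np.random.default_rng(0)
a = rng.normal(size=384)
b = rng.normal(size=384)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The "relationship" is the direction from concept A to concept B.
d = b - a

# Remove a random probe's component along d (Gram-Schmidt), leaving a
# direction at 90 degrees to the relationship: the "left-hand turn."
probe = rng.normal(size=384)
orth = probe - (probe @ d) / (d @ d) * d

print(cosine(d, orth))  # ~0.0, i.e., orthogonal to the relationship
# Searching for concepts whose embeddings lie near a + orth would be one
# way to look for ideas "at an angle" to the original relationship.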

I was excited about agents in the sense of multi-agent, LLM-coordinated tools that would also use Python, RAG, whatever… and I’m currently most impressed by Perplexity’s eagerness to write and run Python code to answer my questions.
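
In case it’s useful, my mental model of that pattern is roughly the loop below. To be clear, llm() is a hypothetical stand-in for whatever chat-completion call you’d use, and this is a sketch of the general “write code, run it, return stdout” idea, not how Perplexity actually implements it:

import subprocess, sys, tempfile

def llm(prompt: str) -> str:
    # Hypothetical stand-in: call your model of choice here.
    raise NotImplementedError

def agent(question: str) -> str:
    reply = llm(
        "Answer the question. If computation would help, respond with "
        "only a Python script whose stdout is the answer.\n\n" + question
    )
    # Crude heuristic: treat replies that look like Python as code to run.
    if reply.lstrip().startswith(("import", "from", "def", "print")):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(reply)
        result = subprocess.run([sys.executable, f.name],
                                capture_output=True, text=True, timeout=30)
        return result.stdout.strip()
    return reply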

In any case, that’s why I think the evolution of AI is limited by the nature of LLMs, and perhaps by transformers and matrices more specifically.

Where do you see all this going more at that level of detail?

Thanks!

Paul Datta

All we have is language! Language as the OS for collective human ambition. Enjoyed reading your post, thanks.

