4 Comments
Brian Carter:

Hi Jack!

Enjoyed your appearance on my favorite podcast, ODD LOTS!!!

I wanted to ask you about a couple of things here:

My understanding is that AI comprises many things, including ML, DL, NLP, and LLMs (I understand some of those are subsets of others).

LLMs clearly have caught the public’s imagination more than, let’s say, ML methods for early detection of breast cancer…

But my (admittedly much less educated) take is that all of ML and DL is important.

I was also increasingly disappointed the more I tested LLMs on things like novel idea generation (in my case, as a comedian, trying to get them to help me write jokes, and as a digital marketer, to come up with marketing ideas).

Because LLMs act as normalization machines, built on the relationships between things as they already are, creativity seems to be more difficult for them. They're more likely to give you the same "creative" things their training data labeled as creative, or tired old dad jokes.
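To put a toy model under that intuition: language models sample the next token from a probability distribution, and standard temperature sampling concentrates mass on the most common continuations. Here is a minimal sketch in Python with invented logits for a joke setup; it illustrates the sampling math, not any particular model's internals:

    import math

    def softmax_with_temperature(logits, temperature=1.0):
        # Lower temperature sharpens the distribution toward the likeliest
        # token; higher temperature flattens it and admits rarer continuations.
        scaled = [x / temperature for x in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    # Invented next-token scores after a joke setup: the familiar punchline
    # dominates, which is the "normalization" effect described above.
    tokens = ["cross", "refuse", "meditate", "defenestrate"]
    logits = [5.0, 2.0, 0.5, -1.0]

    for t in (0.5, 1.0, 1.5):
        probs = softmax_with_temperature(logits, t)
        print(t, {tok: round(p, 3) for tok, p in zip(tokens, probs)})

At low temperature the familiar option swamps everything else, which is one quantified way of saying "tired old dad jokes."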

I did a tiny bit of research into how relationships are scored, and whether there's a way to do the inverse, which might get us closer to a bisociation type of creativity. Or to discover what the 30-, 35-, or 90-degree angle from a relationship might be. I've actually prompted LLMs with polar ideas and asked them for 90-degree "left-hand turn" ideas, with some success. But I know it's not coming from a quantified place.
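For what it's worth, that scoring is typically cosine similarity between embedding vectors, and a literal 90-degree "turn" can be constructed in the plane spanned by two concept vectors. Here is a toy sketch with made-up 3-D "embeddings"; real models use hundreds or thousands of dimensions, and the vectors and labels below are invented for illustration:

    import math

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return math.sqrt(dot(a, a))
    def cosine(a, b): return dot(a, b) / (norm(a) * norm(b))

    def rotate_toward(v, ref, degrees):
        # Build a unit direction `degrees` away from v, inside the plane
        # spanned by v and ref: Gram-Schmidt strips v's component out of
        # ref, then cos/sin weights mix the two unit vectors.
        u = [x / norm(v) for x in v]
        proj = dot(ref, u)
        orth = [r - proj * x for r, x in zip(ref, u)]
        w = [x / norm(orth) for x in orth]
        theta = math.radians(degrees)
        return [math.cos(theta) * a + math.sin(theta) * b
                for a, b in zip(u, w)]

    joke = [0.9, 0.1, 0.2]     # invented "joke" direction
    serious = [0.1, 0.8, 0.3]  # invented "serious" direction
    turn = rotate_toward(joke, serious, 90)
    print(round(cosine(joke, turn), 3))  # ~0.0, i.e. a true 90-degree turn

The same function with degrees=30 or 35 gives the gentler detours I mean.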

I was excited about agents in the sense of multi-agent, LLM-coordinated tools that would also use Python, RAG, and whatever else, and I am currently most impressed by Perplexity's eagerness to write and run Python code to answer my questions.
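That loop is simple to sketch. In the snippet below, call_llm is a stand-in I made up, not Perplexity's (or anyone's) real API, and a real system would sandbox the exec step rather than run generated code directly:

    import io, contextlib

    def call_llm(question):
        # Hypothetical stand-in: pretend the model answered with code.
        return "print(sum(i * i for i in range(1, 11)))"

    def run_python(code):
        # Execute the generated code and capture stdout so the "agent"
        # can read the result back.
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})
        return buf.getvalue().strip()

    question = "What is the sum of the squares of 1 through 10?"
    code = call_llm(question)
    print("model wrote:", code)
    print("result:", run_python(code))  # -> 385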

In any case, that's why I think the evolution of AI is limited by the nature of LLMs, and perhaps by transformers and matrices more specifically.

Where do you see all this going, at that level of detail?

Thanks!

Paul Datta:

All we have is language! Language as the OS for collective human ambition. Enjoyed reading your post, thanks.

teknosaur:

What about Tesla FSD and DeepMind's AlphaFold? These specialized AIs have already crossed human level in their domains and continue to get better. They are different from LLMs, so we have a bunch of candidates right now.

Tem Noon:

Specialized systems are not "human-level intelligence"; they are no more "human" or "intelligent" than a scientific calculator, or an abacus for that matter. Either way, it takes a human to put the output to some useful purpose. The fact that we don't know how minds work, while we do know how calculators work, doesn't mean we can call the mechanism "intelligent" just because we can't produce the same correct output in our heads.

I'd argue that just because we have language describing a metric for intelligence, calibrated and tested by language, doesn't mean LLMs can actually be "human level" at anything useful. They are very good at putting words together, but there's nothing really "human" about what they're doing. The problem is that science is culturally oriented around objective metrics, and the social power of narrative has blinded us to the subjective container each human must recreate in their own mind to evoke what they think the objective LLM is saying.
