DenAI Conference II: bits from Eric Schmidt lunch discussion

Just a touch more from the recent DenAI Summit, made a bit fresher by some recent material from OpenAI. At the summit, Eric Schmidt was the big-name lunch speaker, with Denver Mayor Mike Johnston doing the interviewing.

Schmidt focused much of his attention on how AI is affecting science (quantum computing, climate science, and gene/protein research). At the beginning of his session, though, the Mayor asked him to list three big things shaking up AI in the near term, and later asked about one big worry. I thought the answers were worth commenting on.

Infinite Context Window

The context window is essentially the ‘short-term memory’ of a language model. Making it infinitely long should allow for indefinite conversations, where the model remembers everything you said days, weeks, or months back. You could also dump documents of immense length into a model and ask questions about them.

BUT, having an infinite short-term memory does not necessarily mean that the model is paying attention to everything in that memory. Models are known to pay more attention to items at the beginning and end of their context window - the so-called ‘lost in the middle’ problem. It will be interesting to see how architectures respond to longer and longer windows, or whether, like humans, models develop large memory blind spots.
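
One common way to probe for those blind spots is a “needle in a haystack” test: plant a known fact at different depths in a long context and ask for it back. Here is a minimal sketch of such a test, assuming the OpenAI Python client; the model name, filler text, and “needle” are all placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FILLER = "The quick brown fox jumps over the lazy dog. " * 2000  # long padding text
NEEDLE = "The secret launch code is 7142. "

def probe(depth: float) -> str:
    """Place the needle at a relative depth in the filler
    (0.0 = start of the window, 1.0 = end) and ask the model to retrieve it."""
    cut = int(len(FILLER) * depth)
    context = FILLER[:cut] + NEEDLE + FILLER[cut:]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any long-context model works
        messages=[{
            "role": "user",
            "content": context + "\n\nWhat is the secret launch code?",
        }],
    )
    return resp.choices[0].message.content

for depth in (0.0, 0.5, 1.0):  # beginning, middle, end of the window
    print(depth, "->", probe(depth))
```

If a model retrieves the needle reliably at the edges but misses it in the middle, you have found exactly the kind of memory blind spot described above.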

The Rise of Agents

Agents are one of the most exciting developments this year. We are at the beginning of their use, but I expect them to play a big part in many AI advances next year. An Agent is basically a piece of software that uses an LLM to make decisions autonomously. A super-simple example might be letting the software decide whether incoming emails are important or not, and route them accordingly. More interesting is asking Agents to make management decisions for other Agents - letting them decide how best to solve a problem, delegate work to other Agents, evaluate the results, and decide whether to repeat a process or change the plan. This is where we are heading!
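
To make that super-simple email example concrete, here is a sketch of a triage Agent, again assuming the OpenAI client; the model name and routing actions are placeholders. The point is that the model’s answer, not hand-written rules, drives the program’s control flow:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def classify_email(subject: str, body: str) -> str:
    """Let the LLM make one small autonomous decision: important or routine."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "You triage email. Reply with exactly one word: IMPORTANT or ROUTINE."},
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return resp.choices[0].message.content.strip().upper()

def route_email(subject: str, body: str) -> None:
    # The model's decision, not a hand-written rule, chooses the branch.
    # That handoff of control is what makes this an Agent, however simple.
    if classify_email(subject, body) == "IMPORTANT":
        print(f"Flagging for human attention: {subject}")
    else:
        print(f"Archiving: {subject}")

route_email("Server down in production", "The API has been returning 500s since 2 a.m.")
```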

Last week OpenAI released (or leaked?) a clue about its plans for Agents, called Swarm, describing it as “An educational framework exploring ergonomic, lightweight multi-agent orchestration.” Their take on Agents is remarkably lightweight compared to some of the current Agent frameworks. As the big dog in the AI arena, they will pull the entire industry in that direction.
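
The handoff example from the Swarm repository shows just how lightweight: an Agent is little more than a name, some instructions, and a list of plain Python functions, and delegating to another Agent is just a function that returns it. Below is a lightly annotated version of the example from the repo’s README:

```python
# pip install git+https://github.com/openai/swarm.git
from swarm import Swarm, Agent

client = Swarm()

def transfer_to_agent_b():
    """Handing off to another Agent is just a function that returns it."""
    return agent_b

agent_a = Agent(
    name="Agent A",
    instructions="You are a helpful agent.",
    functions=[transfer_to_agent_b],  # the tools this Agent may call
)

agent_b = Agent(
    name="Agent B",
    instructions="Only speak in Haikus.",
)

# Start with Agent A; the framework routes the conversation from there.
response = client.run(
    agent=agent_a,
    messages=[{"role": "user", "content": "I want to talk to agent B."}],
)
print(response.messages[-1]["content"])
```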

Side note - I loved Schmidt’s rhetorical question about what happens when Agents from separate companies start interacting with each other at full speed. The 2010 Flash Crash taught us some painful lessons about what happens when automated systems interact in unexpected ways.

AI Writing Its Own Code

As someone who frequently writes code that controls large amounts of information, I find this the scariest. Models are incredible at writing code and good at following instructions. But when those instructions don’t explicitly rule out large harms - harms that no human coder would even consider - the model can generate dangerous code. As AI Engineers, we try to keep all such situations locked up in a sandbox, safely restricted from affecting vital systems. But as Agents and code generation become more and more common, it will become easier and easier to leave holes in our sandbox walls. Everyone involved with AI knows about hallucinations - where AI makes things up and says crazy things. But when we allow AI to act on those hallucinations, it gets scary.
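
As a sketch of what the outermost sandbox wall can look like, here is a hypothetical run_untrusted helper that executes generated code in a separate process with a hard timeout. A real sandbox goes much further (containers, dropped network access, filesystem restrictions), so treat this as an illustration of the isolation idea, not a complete defense:

```python
import os
import subprocess
import tempfile

def run_untrusted(code: str, timeout_s: int = 5) -> str:
    """Run model-generated code in a separate process with a hard timeout.
    This is only the outermost wall of a sandbox; it does not restrict
    network or filesystem access on its own."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            ["python3", "-I", path],  # -I: isolated mode, ignores env vars and user site-packages
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return result.stdout if result.returncode == 0 else result.stderr
    except subprocess.TimeoutExpired:
        return "killed: exceeded time limit"
    finally:
        os.remove(path)

print(run_untrusted("print(sum(range(10)))"))  # prints 45
print(run_untrusted("while True: pass"))       # killed: exceeded time limit
```

The holes appear when generated code is allowed to reach outside that process boundary - credentials in the environment, writable shared directories, open network access - which is exactly why the walls need constant attention as Agents multiply.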

Looking Ahead

These are certainly three of the most significant advancements in current Generative AI. I like to focus on them because they are capabilities I can learn to apply in different ways, adjusting to the needs of clients and diverse problems. This tech is moving fast, and we have to stay agile, continuously learning and adapting to harness the power of AI - responsibly and effectively.
