2024-01 Release Notes

January 31, 2024


High Dimensional Research spent much of 2023 iterating our core product offering toward increasingly specific objectives. We started out optimizing for the ideas that excited us most about AI, but those ideas kept running into technical prerequisites that no one had met yet. It became apparent that we should work on those prerequisites ourselves, lest we simply reinvent rudimentary solutions to prop up lofty ideas. Over time, we worked our way down to the core problems of inter-model coordination, inter-model memory, and inter-model reflexivity.

Why these problems? In retrospect, the answer is obvious: an animal's brain organizes and reinterprets incoming sensory information through systems that developed over millions of years. Yet for some reason we expect non-human intelligences to have no learning period at all: to work now, to be adapted to their environments now.

And one of the most common tasks people want models to perform involves interacting with the internet. As a piece of software for autonomous agents, a web browser is easy to implement and difficult to implement well. You can build your browser around vision models, which is slow, expensive, and limited in interaction, or you can try to parse the DOM for an intelligence with neither eyes nor human reflexive cognition. We believed the answer lay in an index of what we call a "reflexive cognitive process." Memory Index, as described on our product page, sits between a passthrough model and the browser. It preprocesses web content for legibility, and it introduces something analogous to "muscle memory": Memory Index anonymizes browsing sessions, ignores any personal or sensitive information, and builds a bank of possible actions collectively taken across all sessions.
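To make that concrete, here is a minimal sketch of where a layer like Memory Index sits and what its action bank does. Every name in it (`MemoryIndex`, `preprocess`, `record`, `suggest`) is a hypothetical illustration of the idea rather than our actual API:

```python
import re
from dataclasses import dataclass, field

@dataclass
class MemoryIndex:
    # Bank of anonymized actions observed across sessions,
    # keyed by a normalized page signature.
    action_bank: dict[str, list[str]] = field(default_factory=dict)

    def preprocess(self, raw_html: str) -> str:
        """Strip markup down to text a model can read without a renderer."""
        text = re.sub(r"<(script|style)\b.*?</\1>", "", raw_html,
                      flags=re.S | re.I)
        text = re.sub(r"<[^>]+>", " ", text)
        return re.sub(r"\s+", " ", text).strip()

    def record(self, page_signature: str, action: str) -> None:
        """Add an anonymized action to the bank for this kind of page."""
        self.action_bank.setdefault(page_signature, []).append(action)

    def suggest(self, page_signature: str) -> list[str]:
        """Surface actions that previously succeeded on similar pages."""
        return self.action_bank.get(page_signature, [])
```

The "muscle memory" framing falls out of `suggest`: a page the system has effectively seen before comes with a ready-made list of actions, so the model spends less time rediscovering an interface from scratch.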

We began by getting infrastructure in place to support a core loop between model and controller. We finished our first version of that infrastructure last November. Then we began work on Memory Index, and finally shipped it at the start of the year.
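For a sense of the shape of that loop, here is a hedged sketch, assuming the controller exposes `observe` and `execute` methods and the model proposes one action per step; none of these interfaces are public, and the names are stand-ins:

```python
def run_core_loop(model_step, controller, max_steps: int = 20):
    """Alternate between model decisions and controller-side execution."""
    observation = controller.observe()            # current page state
    for _ in range(max_steps):
        action = model_step(observation)          # model proposes an action
        if action is None:                        # model signals completion
            break
        observation = controller.execute(action)  # controller applies it
    return observation
```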

We believe that, over time, we can continually improve success rates for autonomous in-browser tasks through use and improvement of Memory Index and the product suite as a whole. To track that progress, last week we deployed a benchmarking suite (and wrote a post about it) and implemented evaluators for benchmark tasks.
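One plausible shape for such an evaluator, where each task carries a success predicate over the final browser state (the names and the example task below are invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkTask:
    name: str
    instruction: str
    # Success predicate over the final URL and page HTML.
    check: Callable[[str, str], bool]

def evaluate(task: BenchmarkTask, final_url: str, final_html: str) -> bool:
    """Score one run of one task: did its success condition hold?"""
    return task.check(final_url, final_html)

# Invented example: success means landing on a confirmation page.
checkout = BenchmarkTask(
    name="checkout",
    instruction="Complete the checkout flow for the item in the cart.",
    check=lambda url, html: "order-confirmed" in url,
)
```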

We also implemented network comparison algorithms for comparing website structures, and changed how our passthrough model endpoint consumes content: we now include inner page HTML in model messages and consistently slice text into appropriately sized messages.
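The slicing step is the easiest piece to illustrate. Here is a sketch of one consistent policy, assuming a fixed character budget; the budget value is made up for the example:

```python
def slice_text(text: str, budget: int = 4000) -> list[str]:
    """Split text into whitespace-aligned chunks of at most `budget` chars.

    A single word longer than the budget passes through as its own chunk.
    """
    chunks: list[str] = []
    current: list[str] = []
    length = 0
    for word in text.split():
        extra = len(word) + (1 if current else 0)  # +1 for the joining space
        if current and length + extra > budget:
            chunks.append(" ".join(current))       # flush the full chunk
            current, length = [], 0
            extra = len(word)
        current.append(word)
        length += extra
    if current:
        chunks.append(" ".join(current))
    return chunks
```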

Finally, as part of this slate of work, we began assessing successful tasks and recording them into Memory Index. We believe we have now laid the groundwork for identifying where we can improve and extend model agency on the internet. If you are not already building with us, testing our services, or using them in the areas where we are confident they excel, we encourage you to visit from time to time and check in on our progress.