With the enormous advances in machine learning over the last few years, it feels not only like we are on the cusp of big things, but like the world is already a genuinely different place than it was just a little while ago. I mean, how does even just assigning homework work in a world of LLMs?
There’s one product I haven’t seen discussed yet that I’m really looking forward to. I call it the Next button. It’s an app I could install on my computer that has access to the keyboard and mouse and puts just a single button in the system tray. When I click it, the model does whatever it thinks is most useful to do next, conditioned on things like the contents of my email accounts, calendar, files on my computer and cloud storage, and text messages (ideally, any digital content associated with me).
Actions the Next button might take include things like drafting a reply to an email sitting in my inbox, proposing a response to a calendar invitation, tidying up and filing documents in my cloud storage, or queuing up a text message I have been meaning to send.
It doesn’t take over any of the actual creative parts of my job (those still require input from me), and at every step there is an opportunity for human review and proofreading. I would ultimately have to take the actions the model proposes. But it sure would make most things a whole lot easier and faster.
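To make the shape of this concrete, here is a minimal sketch of that propose-then-approve loop. Everything in it is hypothetical: `gather_context` and `propose_next_action` are stand-ins for whatever data sources and model a real version would wire in. The point is just the workflow: the model only suggests, and nothing happens without an explicit approval.

```python
"""A minimal sketch of the Next button's propose-then-approve loop.

All names and data here are hypothetical placeholders, not a real API.
"""

from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str  # human-readable summary, e.g. "Draft a reply to Alice"
    draft: str        # the content the model wants to produce


def gather_context() -> dict:
    """Placeholder for pulling in email, calendar, files, messages, etc."""
    return {"unread_email": ["Alice: can you send over the Q3 numbers?"]}


def propose_next_action(context: dict) -> ProposedAction:
    """Placeholder for the model call that decides what is most useful next."""
    return ProposedAction(
        description="Draft a reply to Alice with the Q3 numbers attached",
        draft="Hi Alice, here are the Q3 numbers you asked for...",
    )


def next_button() -> None:
    """One click of the Next button: propose, show, and act only on approval."""
    context = gather_context()
    proposal = propose_next_action(context)

    print(f"Proposed: {proposal.description}")
    print(f"Draft:\n{proposal.draft}\n")

    # The human stays in the loop: nothing happens without explicit approval.
    if input("Approve? [y/N] ").strip().lower() == "y":
        print("Action taken.")  # in the real thing: send the email, file the doc, etc.
    else:
        print("Skipped.")


if __name__ == "__main__":
    next_button()
```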
I suspect all of this will be possible in the next 12-18 months. Tab completion for your work life is probably coming. The future is going to be weird.