The General Purpose Pendulum
Pendulums do what they do: they swing one way, then they swing back the other way. Some oscillate quickly; some slowly; and some so slowly that you can watch the earth rotate underneath them. It's a cliché to talk about any technical trend as a "pendulum," though it's accurate often enough.
We may be watching one of computing's longest-term trends turn around, becoming the technological equivalent of Foucault's very long, slow pendulum: the trend towards generalization. That trend has been swinging in the same direction for some 70 years–since the invention of computers, really. The first computers were just calculating engines designed for specific purposes: breaking codes (in the case of Britain's Bombe) or calculating missile trajectories. But those primitive computers soon got the ability to store programs, making them much more flexible; eventually, they became "general purpose" (i.e., business) computers. If you've ever looked at a manual for the IBM 360's machine language, you'll find many instructions that only make sense in a business context–for example, instructions for arithmetic in binary coded decimal (BCD).
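To make that last point concrete: BCD stores each decimal digit in its own 4-bit nibble, so a business program can add currency amounts digit by digit with no binary rounding surprises. The Python sketch below emulates the representation to show the idea; it isn't the 360's actual packed-decimal instruction semantics.

```python
# Minimal sketch of binary coded decimal (BCD): one decimal digit per
# 4-bit nibble. This emulates the representation in Python to show the
# idea; it is not the IBM 360's actual packed-decimal instruction set.

def to_bcd(n: int) -> bytes:
    """Pack a non-negative integer into BCD, one digit per nibble."""
    digits = str(n)
    if len(digits) % 2:
        digits = "0" + digits            # pad to a whole number of bytes
    return bytes(int(digits[i]) << 4 | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

def from_bcd(b: bytes) -> int:
    """Unpack BCD bytes back into an integer."""
    return int("".join(f"{byte >> 4}{byte & 0xF}" for byte in b))

def bcd_add(a: bytes, b: bytes) -> bytes:
    """Add two BCD values (here by round-tripping through int; decimal
    hardware does it nibble by nibble, with decimal carries)."""
    return to_bcd(from_bcd(a) + from_bcd(b))

# $12.34 + $0.66, kept as exact decimal cents the whole way through.
total = bcd_add(to_bcd(1234), to_bcd(66))
print(total.hex(), "->", from_bcd(total))    # 1300 -> 1300
```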
That was just the beginning. In the 70s, word processors started replacing typewriters. Word processors were essentially early personal computers designed for typing–and they were quickly replaced by personal computers themselves. With the invention of email, computers became communications devices. With file sharing software like Napster and MP3 players like WinAmp, computers started replacing radios–then, when Netflix started streaming, televisions. CD and DVD players are inflexible, task-specific computers, much like word processors or the Bombe, and their functions have been subsumed by general-purpose machines.
The trend towards generalization also took place within software. Sometime around the turn of the millennium, many of us realized that web browsers (yes, even the early Mosaic, Netscape, and Internet Explorer) could be used as a general user interface for software; all a program had to do was express its user interface in HTML (using forms for user input) and provide a web server so the browser could display the page. It's not an accident that Java was perhaps the last programming language to have a graphical user interface (GUI) library; other languages that appeared at roughly the same time (Python and Ruby, for example) never needed one.
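The pattern is easy to see in a few lines of Python. The sketch below is a toy example of "the browser as the GUI": the program's entire user interface is an HTML form served over HTTP, and any browser can render it. It uses only the standard library; the port number and the form field are arbitrary choices.

```python
# Toy example of "the browser as the GUI": the program's only user
# interface is an HTML form; any browser can render it. Standard
# library only; the port and field name are arbitrary choices.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

FORM = b"""<html><body>
<form method="POST">
  Name: <input name="name">
  <input type="submit" value="Greet">
</form>
</body></html>"""

class AppUI(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the "user interface": just an HTML page with a form.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(FORM)

    def do_POST(self):
        # Read the submitted form fields and respond with a new page.
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        name = fields.get("name", ["world"])[0]
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(f"<html><body>Hello, {name}!</body></html>".encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), AppUI).serve_forever()
```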
If we look at hardware, machines have gotten faster and faster–and more flexible in the process. I’ve already mentioned the appearance of instructions specifically for “business” in the IBM 360. GPUs are specialized hardware for high-speed computation and graphics; however, they’re much less specialized than their ancestors, dedicated vector processors. Smartphones and tablets are essentially personal computers in a different form factor, and they have performance specs that beat supercomputers from the 1990s. And they’re also cameras, radios, televisions, game consoles, and even credit cards.
So, why do I think this pendulum might start swinging the other way? A recent article in the Financial Times, "Big Tech Raises its Bets on Chips," notes that Google and Amazon have both developed custom chips for use in their clouds. It hypothesizes that the next generation of hardware will be one in which chip development is integrated more closely into a wider strategy. More specifically, "the best hope of producing new leaps forward in speed and performance lies in the co-design of hardware, software and neural networks." Co-design sounds like designing hardware that is highly optimized for running neural networks, designing neural networks that are a good match for that specific hardware, and designing programming languages and tools for that specific combination of hardware and neural network. Rather than taking place sequentially (hardware first, then programming tools, then application software), all of these activities take place concurrently, informing each other. That sounds like a turn away from general-purpose hardware, at least superficially: the resulting chips will be good at doing one thing extremely well.

It's also worth noting that, while there is a lot of interest in quantum computing, quantum computers will inevitably be specialized processors attached to conventional computers. There is no reason to believe that a quantum computer can (or should) run general-purpose software, such as software that renders video streams or software that calculates spreadsheets. Quantum computers will be a big part of our future–but not in a general-purpose way. Both co-design and quantum computing step away from general-purpose computing hardware. We've come to the end of Moore's Law and can't expect further speedups from the hardware itself; what we can expect is improved performance from hardware that has been optimized for a specific task.
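To make the idea concrete, here's a hedged Python sketch of what the core of co-design might look like: candidate network architectures and candidate hardware configurations are scored jointly, against a single cost that mixes accuracy, latency, and power, instead of fixing the chip first and tuning the model afterwards. Every name, configuration option, and number in it is an illustrative assumption, not any vendor's actual toolchain.

```python
# Hedged sketch of hardware/network co-design as a joint search: instead
# of fixing the chip and then tuning the model, every (network, hardware)
# pair is scored together. All names and numbers are illustrative.
from itertools import product

network_choices = [
    {"layers": 12, "width": 256},
    {"layers": 24, "width": 512},
]
hardware_choices = [
    {"mac_units": 1024, "sram_kb": 512},
    {"mac_units": 4096, "sram_kb": 2048},
]

def estimate_accuracy(net):           # stand-in for a real training run
    return 0.80 + 0.005 * net["layers"]

def estimate_latency_ms(net, hw):     # stand-in for a cycle-accurate simulator
    ops = net["layers"] * net["width"] ** 2
    return ops / (hw["mac_units"] * 1e6)

def estimate_power_w(hw):             # stand-in for a power model
    return 0.5 + hw["mac_units"] / 2048

def joint_cost(net, hw):
    # A single objective covering model quality *and* hardware behavior.
    return ((1 - estimate_accuracy(net))
            + 0.01 * estimate_latency_ms(net, hw)
            + 0.05 * estimate_power_w(hw))

best = min(product(network_choices, hardware_choices),
           key=lambda pair: joint_cost(*pair))
print("co-designed pick:", best)
```

In practice, those "estimates" would be training runs and cycle-accurate simulations, which is exactly why the tooling question in the next paragraph matters.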
Co-design of hardware, software, and neural networks will inevitably bring a new generation of tools to software development. What will those tools be? Our current development environments don't require programmers to know much (if anything) about the hardware. Assembly language programming is a specialty that's really only important for embedded systems (and not all of them) and for a few applications that require the utmost in performance. In the world of co-design, will programmers need to know more about hardware? Or will a new generation of tools abstract the hardware away, even as they weave the hardware and the software together even more intimately? I can certainly imagine tools with modules for different kinds of neural network architectures; they might know about the kind of data the processor is expected to deal with; they might even allow a kind of "pre-training"–something that could ultimately give you GPT-3 on a chip. (Well, maybe not on a chip. Maybe a few thousand chips designed for some distributed computing architecture.) Will it be possible for a programmer to say "This is the kind of neural network I want, and this is how I want to program it," and let the tool do the rest? If that sounds like a pipe dream, realize that tools like GitHub Copilot are already automating programming.
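What might the programmer-facing surface of such a tool look like? The sketch below is purely hypothetical: CodesignToolkit, its methods, and its target names are invented to illustrate the shape of "declare the network and the constraints, let the toolchain produce both the silicon and the software," not an existing API.

```python
# Purely hypothetical sketch of a co-design tool's programming surface.
# CodesignToolkit, its methods, and the target names are invented for
# illustration; no such library exists. The point is the workflow's shape:
# declare the model and its constraints, and let the tool do the rest.

class CodesignToolkit:                       # hypothetical
    def __init__(self, target: str):
        self.target = target                 # e.g. an edge-inference chip family
        self.spec: dict = {}

    def declare_model(self, family: str, **constraints):
        self.spec = {"family": family, **constraints}
        return self

    def compile(self) -> str:
        # A real tool would co-optimize the netlist, the weights, and the
        # runtime together; this stub just echoes the request it was given.
        return f"chip + runtime for {self.spec} on {self.target}"

artifact = (
    CodesignToolkit(target="edge-npu")
    .declare_model("transformer", params="350M", latency_ms=5, power_w=2)
    .compile()
)
print(artifact)
```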
Chip design is the poster child for “the first unit costs 10 billion dollars; the rest are all a penny apiece.” That has limited chip design to well-financed companies that are either in the business of selling chips (like Intel and AMD) or that have specialized needs and can buy in very large quantities themselves (like Amazon and Google). Is that where it will stop–increasing the imbalance of power between a few wealthy companies and everyone else–or will co-design eventually enable smaller companies (and maybe even individuals) to build custom processors? To me, co-design doesn’t make sense if it’s limited to the world’s Amazons and Googles. They can already design custom chips. It’s expensive, but that expense is itself a moat that competitors will find hard to cross. Co-design is about improved performance, yes; but as I’ve said, it’s also inevitably about improved tools. Will those tools result in better access to semiconductor fabrication facilities?
We’ve seen that kind of transition before. Designing and making printed circuit boards used to be hard. I tried it once in high school; it requires acids and chemicals you don’t want to deal with, and a hobbyist definitely can’t do it in volume. But now, it’s easy: you design a circuit with a free tool like Kicad or Fritzing, have the tool generate a board layout, send the layout to a vendor through a web interface, and a few days later, a package arrives with your circuit boards. If you want, you can have the vendor source the board’s components and solder them in place for you. It costs a few tens of dollars, not thousands. Can the same thing happen at the chip level? It hasn’t yet. We’ve thought that field-programmable gate arrays might eventually democratize chip design, and to a limited extent, they have. FPGAs aren’t hard for small- or mid-sized businesses that can afford a few hardware engineers, but they’re far from universal, and they definitely haven’t made it to hobbyists or individuals. Furthermore, FPGAs are still standardized (generalized) components; they don’t democratize the semiconductor fabrication plant.
What would "cloud computing" look like in a co-designed world? Let's say that a mid-sized company designs a chip that implements a specialized language model, perhaps something like O'Reilly Answers. Would they have to run this chip on their own hardware, in their own data center? Or would they be able to ship these chips to Amazon or Google for installation in their AWS and GCP data centers? That would require a lot of work standardizing the interface to the chip, but it's not inconceivable. As part of this evolution, the co-design software will probably end up running in someone's cloud (much as AWS SageMaker does today), and it will "know" how to build devices that run on the cloud provider's infrastructure. The future of cloud computing might be running custom hardware.
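Standardizing the interface to the chip would probably mean that the application sees a small, stable API while the provider worries about where the silicon physically lives. The Python sketch below is a hypothetical illustration of that contract; AcceleratorSession, its endpoint, and its methods are invented, not any cloud provider's actual API.

```python
# Hypothetical sketch of a standardized interface to a custom inference chip
# hosted in someone else's cloud. AcceleratorSession, the endpoint format,
# and the model identifier are invented for illustration; they are not a
# real provider's API.
from dataclasses import dataclass

@dataclass
class AcceleratorSession:                    # hypothetical
    endpoint: str                            # where the provider hosts the chip
    model_id: str                            # which model/chip pairing to use

    def infer(self, prompt: str) -> str:
        # A real implementation would serialize the request and send it over
        # the network; this stub only shows the contract the caller relies on.
        # The caller never needs to know which rack the chip sits in.
        return f"[{self.model_id} @ {self.endpoint}] response to {prompt!r}"

session = AcceleratorSession(
    endpoint="https://accelerators.example-cloud.com/v1",
    model_id="answers-lm",                   # e.g. a specialized language model
)
print(session.infer("How do I tune a garbage collector?"))
```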
We inevitably have to ask what this will mean for users: for those who will use the online services and physical devices that these technologies enable. We may be seeing that pendulum swing back towards specialized devices. A product like the Sonos speaker is essentially a re-specialization of a device that was formerly a stereo system and then became a computer. And while I (once) lamented the idea that we'd eventually all wear jackets with innumerable pockets filled with different gadgets (iPods, i-Android-phones, Fitbits, Yubikeys, a collection of dongles and earpods, you name it), some of those products make sense: I lament the loss of the iPod, as distinct from the general-purpose phone. A tiny device that could carry a large library of music, and do nothing else, was (and would still be) a wonder.
But those re-specialized devices will also change. A Sonos speaker is more specialized than a laptop plugged into an amp via the headphone jack and playing an MP3, but don't mistake it for a 1980s stereo, either. If inexpensive, high-performance AI becomes commonplace, we can expect a new generation of exceedingly smart devices. That means voice control that really works (maybe even for those who speak with an accent), locks that can identify people accurately regardless of skin color, and appliances that can diagnose themselves and call a repairman when they need to be fixed. (I've always wanted a furnace that could notify my service contractor when it breaks at 2 AM.) Putting intelligence on a local device could improve privacy–the device wouldn't need to send as much data back to the mothership for processing. (We're already seeing this on Android phones.) We might get autonomous vehicles that communicate with each other to optimize traffic patterns. We might go beyond voice-controlled devices to non-invasive brain control. (Elon Musk's Neuralink has the right idea, but few people will want sensors surgically embedded in their brains.)
And finally, as I write this, I realize that I’m writing on a laptop–but I don’t want a better laptop. With enough intelligence, would it be possible to build environments that are aware of what I want to do? And offer me the right tools when I want them (possibly something like Bret Victor’s Dynamicland)? After all, we don’t really want computers. We want “bicycles for the mind”–but in the end, Steve Jobs only gave us computers.
That’s a big vision that will require embedded AI throughout. It will require lots of very specialized AI processors that have been optimized for performance and power consumption. Creating those specialized processors will require re-thinking how we design chips. Will that be co-design, designing the neural network, the processor, and the software together, as a single piece? Possibly. It will require a new way of thinking about tools for programming–but if we can build the right kind of tooling, “possibly” will become a certainty.