Deep Learning’s Little-Known Debt to The Innovator’s Dilemma

In 1997, Harvard Business School professor Clayton Christensen created a sensation among venture capitalists and entrepreneurs with his book The Innovator’s Dilemma. The lesson that most people remember from it is that a well-run business can’t afford to switch to a new approach—one that ultimately will replace its current business model—until it is too late.

One of the most famous examples of this conundrum involved photography. The large, very profitable companies that made film for cameras knew in the mid-1990s that digital photography would be the future, but there was never really a good time for them to make the switch. At almost any point they would have lost money. So what happened, of course, was that they were displaced by new companies making digital cameras. (Yes, Fujifilm did survive, but the transition was not pretty, and it involved an improbable series of events, machinations, and radical changes.)


A second lesson from Christensen’s book is less well remembered but is an integral part of the story. The new companies springing up might get by for years with a disastrously less capable technology. Some of them, nevertheless, survive by finding a new niche they can fill that the incumbents cannot. That is where they quietly grow their capabilities.

For example, the early digital cameras had much lower resolution than film cameras, but they were also much smaller. I used to carry one on my key chain in my pocket and take photos of the participants in every meeting I had. The resolution was way too low to record stunning vacation vistas, but it was good enough to augment my poor memory for faces.

This lesson also applies to research. A great example of an underperforming new approach was the second wave of neural networks during the 1980s and 1990s that would eventually revolutionize artificial intelligence starting around 2010.

Neural networks of various sorts had been studied as mechanisms for machine learning since the early 1950s, but they weren’t very good at learning interesting things.

In 1979, Kunihiko Fukushima first published his research on something he called shift-invariant neural networks, which enabled his self-organizing networks to learn to classify handwritten digits wherever they were in an image. Then, in the 1980s, a technique called backpropagation was rediscovered; it allowed for a form of supervised learning in which the network was told what the right answer should be. In 1989, Yann LeCun combined backpropagation with Fukushima’s ideas into something that has come to be known as convolutional neural networks (CNNs). LeCun, too, concentrated on images of handwritten digits.
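
The key mechanism those networks share is easy to sketch: a small set of weights, the filter, is applied at every position of the image, so a pattern produces the same response wherever it appears. The following toy sketch, in plain Python with NumPy and a hand-picked filter rather than learned weights (it is an illustration, not anyone’s actual implementation), shows that shift invariance.

```python
import numpy as np

def convolve2d(image, kernel):
    """Apply the same small filter at every position of the image.

    Deep-learning "convolution" layers compute this kind of sliding sum;
    because identical weights are used everywhere, a pattern gives the
    same response wherever it appears in the image.
    """
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

# Toy demonstration: the same vertical-edge filter fires on a stroke
# no matter where the stroke is placed.
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

for column in (2, 5):
    image = np.zeros((8, 8))
    image[2:6, column] = 1.0          # a vertical stroke at a different position
    response = convolve2d(image, kernel)
    print(column, response.max())     # the peak response is identical each time
```

Part of LeCun’s contribution was showing that the weights of many such filters, stacked in layers, could themselves be learned with backpropagation rather than designed by hand.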

Over the next 10 years, a database of handwritten digits collected by the U.S. National Institute of Standards and Technology (NIST) and modified by LeCun became a standard benchmark, consisting of 60,000 training digits and 10,000 test digits. This test database, called MNIST, allowed researchers to precisely measure and compare the effectiveness of different improvements to CNNs. There was a lot of progress, but CNNs were no match for the entrenched AI methods in computer vision when applied to arbitrary images, such as those generated by early self-driving cars or industrial robots.
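
The value of a fixed benchmark like MNIST is that every method is scored the same way: train on the 60,000 training digits, then report the fraction of the 10,000 held-out test digits labeled correctly. Here is a minimal sketch of that evaluation, assuming a hypothetical load_mnist() loader and a model object with fit and predict methods; these are placeholders, not any specific library’s API.

```python
import numpy as np

def test_accuracy(model, test_images, test_labels):
    """Score a classifier the way MNIST results were reported:
    the fraction of the fixed 10,000-digit test set it labels correctly."""
    predictions = model.predict(test_images)   # assumed to return one label per image
    return float(np.mean(predictions == test_labels))

# Hypothetical loaders and model -- placeholders for whatever method is being compared.
# train_images, train_labels = load_mnist(split="train")   # 60,000 digits
# test_images, test_labels = load_mnist(split="test")      # 10,000 digits
# model.fit(train_images, train_labels)
# print(f"MNIST test accuracy: {test_accuracy(model, test_images, test_labels):.4f}")
```

Because the test split never changes, accuracy figures are directly comparable across papers, which is what let incremental improvements to CNNs accumulate through the 1990s and 2000s.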

But during the 2000s, more and more learning techniques and algorithmic improvements were added to CNNs, leading to what is now known as deep learning. In 2012, suddenly, and seemingly out of nowhere, deep learning outperformed the standard computer vision algorithms in a set of test images of objects, known as ImageNet. The poor cousin of computer vision triumphed, and it completely changed the field of AI.

A small number of people had labored for decades and surprised everyone. Congratulations to all of them, both well known and not so well known.

But beware. The message of Christensen’s book is that such disruptions never stop. Those standing tall today will be surprised by new methods that they have not begun to consider. There are small groups of renegades trying all sorts of new things, and some of them, too, are willing to labor quietly and against all odds for decades. One of those groups will someday surprise us all.

I love this aspect of technological and scientific disruption. It is what makes us humans great. And dangerous.

This article appears in the July 2022 print issue as “The Other Side of The Innovator’s Dilemma.”
