WHAT'S NEXT IN COMPUTING?
The computing industry progresses in two mostly independent cycles: financial and product cycles. There has been a lot of handwringing lately about where we are in the financial cycle. Financial markets get a lot of attention. They tend to fluctuate unpredictably and sometimes wildly. The product cycle by comparison gets relatively little attention, even though it is what actually drives the computing industry forward. We can try to understand and predict the product cycle by studying the past and extrapolating into the future.
New computing eras have occurred every 10–15 years
Tech product cycles are mutually reinforcing interactions between platforms and applications. New platforms enable new applications, which in turn make the new platforms more valuable, creating a positive feedback loop. Smaller, offshoot tech cycles happen all the time, but every once in a while — historically, about every 10 to 15 years — major new cycles begin that completely reshape the computing landscape.
Financial and product cycles evolve mostly independently
The PC enabled entrepreneurs to create word processors, spreadsheets, and many other desktop applications. The internet enabled search engines, e-commerce, e-mail and messaging, social networking, SaaS business applications, and many other services. Smartphones enabled mobile messaging, mobile social networking, and on-demand services like ride sharing. Today, we are in the middle of the mobile era. It is likely that many more mobile innovations are still to come.
Each product era can be divided into two phases: 1) the gestation phase, when the new platform is first introduced but is expensive, incomplete, and/or difficult to use, and 2) the growth phase, when a new product comes along that solves those problems, kicking off a period of exponential growth.
The Apple II was released in 1977 (and the Altair in 1975), but it was the release of the IBM PC in 1981 that kicked off the PC growth phase.
PC sales per year (thousands)
The internet’s gestation phase took place in the 80s and early 90s when it was mostly a text-based tool used by academia and government. The release of the Mosaic web browser in 1993 started the growth phase, which has continued ever since.
Worldwide internet users
There were feature phones in the 90s and early smartphones like the Sidekick and Blackberry in the early 2000s, but the smartphone growth phase really started in 2007–8 with the release of the iPhone and then Android. Smartphone adoption has since exploded: about 2B people have smartphones today. By 2020, 80% of the global population will have one.
Worldwide smartphone sales per year (millions)
If the 10–15 year pattern repeats itself, the next computing era should enter its growth phase in the next few years. In that scenario, we should already be in the gestation phase. There are a number of important trends in both hardware and software that give us a glimpse into what the next era of computing might be. Here I talk about those trends and then make some suggestions about what the future might look like.
Hardware: small, cheap, and ubiquitous
In the mainframe era, only large organizations could afford a computer. Minicomputers were affordable by smaller organizations, PCs by homes and offices, and smartphones by individuals.
Computers are getting steadily smaller
We are now entering an era in which processors and sensors are getting so small and cheap that there will be many more computers than there are people.
There are two reasons for this. One is the steady progress of the semiconductor industry over the past 50 years (Moore’s law). The second is what Chris Anderson calls “the peace dividend of the smartphone war”: the runaway success of smartphones led to massive investments in processors and sensors. If you disassemble a modern drone, VR headset, or IoT device, you’ll find mostly smartphone components.
In the modern semiconductor era, the focus has shifted from standalone CPUs to bundles of specialized chips known as systems-on-a-chip.
Computer prices have been steadily dropping
Typical systems-on-a-chip bundle energy-efficient ARM CPUs plus specialized chips for graphics processing, communications, power management, video processing, and more.
1 GHz Linux computer for $5
This new architecture has dropped the price of basic computing systems from about $100 to about $10. The Raspberry Pi Zero is a 1 GHz Linux computer that you can buy for $5. For a similar price you can buy a wifi-enabled microcontroller that runs a version of Python. Soon these chips will cost less than a dollar. It will be cost-effective to embed a computer in almost anything.
Meanwhile, there are still impressive performance improvements happening in high-end processors. Of particular importance are GPUs (graphics processors), the best of which are made by Nvidia. GPUs are useful not only for traditional graphics processing, but also for machine learning algorithms and virtual/augmented reality devices. Nvidia’s roadmap promises significant performance improvements in the coming years.
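To give a sense of how little code it takes to put one of these cheap, wifi-enabled boards to work, here is a minimal MicroPython-style sketch. It assumes an ESP8266/ESP32-class board; the network name, password, sensor wiring, and polling interval are all placeholders, not a recommendation of any particular product.

# A minimal MicroPython-style sketch for a $5-class, wifi-enabled
# microcontroller (an ESP8266/ESP32-class board is assumed).
# The network name, password, and pin wiring are placeholders.
import network
import machine
import time

# Join the local wifi network.
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("my-network", "my-password")
while not wlan.isconnected():
    time.sleep(1)
print("connected, address:", wlan.ifconfig()[0])

# Poll an analog sensor wired to the board's ADC and report readings.
sensor = machine.ADC(0)
while True:
    print("sensor reading:", sensor.read())
    time.sleep(60)  # one reading per minute

A few lines like these, plus a dollar or two of hardware, are enough to make an ordinary object network-connected, which is the point of embedding a computer in almost anything.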
Google’s quantum computer
A wildcard technology is quantum computing, which today exists mostly in laboratories but if made commercially viable could lead to orders-of-magnitude performance improvements for certain classes of algorithms in fields like biology and artificial intelligence.
Software: the golden age of AI
There are many exciting things happening in software today. Distributed systems are one good example. As the number of devices has grown exponentially, it has become increasingly important to 1) parallelize tasks across multiple machines and 2) communicate and coordinate among devices. Interesting distributed systems technologies include systems like Hadoop and Spark for parallelizing big data problems, and Bitcoin/blockchain for securing data and assets.
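To make the "parallelize tasks across multiple machines" point concrete, here is a minimal word-count sketch written against Spark's Python API (PySpark). The input path is a placeholder; the same few lines run on a laptop or spread across a large cluster without change.

# A minimal PySpark word-count sketch, the canonical example of
# parallelizing a data problem across many machines.
# "hdfs:///data/logs.txt" is a placeholder path.
from pyspark import SparkContext

sc = SparkContext(appName="word-count-sketch")

counts = (
    sc.textFile("hdfs:///data/logs.txt")      # the file is split across the cluster
      .flatMap(lambda line: line.split())     # break each line into words
      .map(lambda word: (word, 1))            # emit (word, 1) pairs
      .reduceByKey(lambda a, b: a + b)        # sum the counts per word, in parallel
)

for word, count in counts.take(10):           # bring a small sample back to the driver
    print(word, count)

sc.stop()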
But perhaps the most exciting software breakthroughs are happening in artificial intelligence (AI). AI has a long history of hype and disappointment. Alan Turing himself predicted that machines would be able to successfully imitate humans by the year 2000. However, there are good reasons to think that AI might now finally be entering a golden age.
“Machine learning is a core, transformative way by which we’re rethinking everything we’re doing.”
Sundar Pichai, Google CEO
A lot of the excitement in AI has focused on deep learning, a machine learning technique that was popularized by a now-famous 2012 Google project that used a giant cluster of computers to learn to identify cats in YouTube videos. Deep learning is a descendant of neural networks, a technology that dates back to the 1940s. It was brought back to life by a combination of factors, including new algorithms, cheap parallel computation, and the widespread availability of large data sets.
ImageNet challenge error rates (red line = human performance)
It’s tempting to dismiss deep learning as another Silicon Valley buzzword. The excitement, however, is supported by impressive theoretical and real-world results. For example, the error rates for the winners of the ImageNet challenge — a popular machine vision contest — were in the 20–30% range prior to the use of deep learning. Using deep learning, the accuracy of the winning algorithms has steadily improved, and in 2015 surpassed human performance.
Many of the papers, data sets, and software tools related to deep learning have been open sourced. This has had a democratizing effect, allowing individuals and small organizations to build powerful applications. WhatsApp was able to build a global messaging system that served 900M users with just 50 engineers, compared to the thousands of engineers that were needed for prior generations of messaging systems. This “WhatsApp effect” is now happening in AI. Software tools like Theano and TensorFlow, combined with cloud data centers for training, and inexpensive GPUs for deployment, allow small teams of engineers to build state-of-the-art AI systems.
For example, a solo programmer working on a side project used TensorFlow to colorize black-and-white photos.
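To give a sense of how accessible these tools have become, here is a minimal sketch using TensorFlow's high-level Keras API to train a small digit classifier on the public MNIST data set. It is a toy model rather than the colorization project mentioned above, but it relies on the same freely available building blocks.

# A minimal TensorFlow/Keras sketch: train a small neural network to
# classify handwritten digits from the public MNIST data set.
# A toy example, not a state-of-the-art system.
import tensorflow as tf

# Load and normalize a standard data set of 28x28 grayscale digit images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network: flatten each image, one hidden layer,
# and a 10-way softmax output (one class per digit).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))  # loss and accuracy on held-out data

Twenty-odd lines, a public data set, and commodity hardware are enough to train a working image classifier, which is exactly the democratizing effect described above.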