To quote Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans (which I recommend you check out), “As a career choice, ‘futurist’ is nice work if you can get it. You write books making predictions that can’t be evaluated for decades and whose ultimate validity won’t affect your reputation — or your book sales — in the here and now.”
So while I'm not sure I belong in the category of futurist, I was asked to contribute to the book After Shock: The World's Foremost Futurists Reflect on 50 Years of Future Shock and Look Ahead to the Next 50. Marking the 50-year anniversary of the landmark book Future Shock, After Shock is a collection of essays sharing views of the future through the unique lenses of more than 100 leading futurists, ranging from Ray Kurzweil to Lord Martin Rees, with a foreword by Deb Westphal, Chairman of the Board of Toffler Associates.
If you're interested in what I had to say, just download the chapter.
Based on an executive workshop held in Minneapolis, MN
If you're the CEO or a board member of a company that manufactures healthcare, construction, agriculture, power-generation, pharmaceutical or industrial machines, you've probably heard about IoT, edge computing, AI, 5G and cloud computing. But why should you care? Why should your company care?
While finding ways to use technology to save money is always good, the bigger driver is using software to increase revenue. I'll make the case that, as a manufacturer of construction, packaging, oil and gas, healthcare or transportation machines, you can double your revenue and quadruple your margins by building and selling digital service products. Furthermore, you'll create a barrier that your competition will find difficult to cross.
Next-generation machines are increasingly powered by software. Porsche's latest Panamera has 100 million lines of code (a measure of the amount of software), up from only two million lines in the previous generation. Tesla owners have come to expect new features delivered through software updates to their vehicles. A software-defined automobile is the first car that will end its life with more features than it began with. But it's not only cars: healthcare machines are also becoming more software defined. A drug-infusion pump may have more than 200,000 lines of code, and an MRI scanner more than 7,000,000. A modern boom lift (commonly used on construction sites) has 40 sensors and three million lines of code, and a farm's combine harvester has over five million. Of course, we can debate whether lines of code is a good measure of software, but I think you get the point: machines are increasingly software defined.
So, if machines are becoming more software defined, then the business models that applied to the world of software may also apply to the world of machines. In the rest of this article we’ll cover three business models.
Early on in the software industry, we created products and sold them on a CD; if you wanted the next product, you'd have to buy the next CD. As software products became more complex, companies like Oracle and SAP moved to a business model where you bought the product (e.g., ERP or database) together with a service contract. That service contract was priced at roughly 2% of the purchase price of the product per month. Over time, this became the largest and most profitable component of many enterprise software companies. In the year before Oracle bought Sun Microsystems (when Oracle was still a pure software business), it had revenues of approximately $15B; only $3B of that was product revenue, while the other $12B (80%) was high-margin, recurring service revenue.
But what is service? Is service answering the phone nicely from Bangalore? Is it flipping burgers at McDonald's? The simple answer is no. Service is the delivery of information that is personal and relevant to you. That could be the hotel concierge telling you where to get the best Sichuan food within walking distance, or your doctor telling you that, based on your genome and lifestyle, you should be on Lipitor. Service is personal and relevant information.
I've heard many executives of companies that make machines say, "Our customers won't pay for service." Well, of course: if you think service is break-fix, then the customer rightly thinks you should simply build a reliable product. Remember Oracle's service revenue? In 2004, the Oracle Support organization studied 100 million service requests and found that over 99.9% of them were answered with known information. Aggregating information from thousands of different deployments of the software, even in a disconnected state, represents far more value than the knowledge of a single person in a single location. Service is not break-fix. Service is personal and relevant information about how to maintain or optimize the availability, performance or security of the product, all delivered in time and on time.
The next major step in software business models was to connect to the computers running the software. This enabled even more personal and relevant information on how to maintain or optimize the performance, availability and security of the software product. These digital services are designed to assist the IT workers who maintain or optimize the product (e.g., a database, middleware or a financial application). For example, knowing the current patch level of the software lets the service recommend that only the relevant security patches be applied. Traditional software companies charge between 2 and 3% of the product price per month for a connected digital service. The advantage of this model is the ability to target the installed base of enterprises that have already purchased the product under the first, traditional model.
Now let's move to the world of machines. If a company knows both the model number and current configuration of a machine, as well as the time-series data coming from its hundreds of sensors, then the digital service can be even more personal and relevant. It allows the company to provide precision assistance to the workers who maintain or optimize the performance, availability and security of the healthcare, agriculture, construction, transportation or water-purification machine.
Furthermore, assume you build this digital service product and price it at just 1% of the purchase price of the machine per month. If your company sells a machine for $200K and has an installed base of 4,000 connected machines, you could generate nearly $100M of high-margin, annual recurring revenue (the quick calculation below shows the arithmetic). And since digital service margins can be much bigger than product margins, companies that have moved to just a 50/50 model (50% service, 50% product) have seen their margins quadruple.
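A minimal Python sketch of that back-of-the-envelope math; the machine price, installed base and 1%-per-month rate come from the paragraph above:

```python
# Back-of-the-envelope: recurring revenue from a digital service product.
machine_price = 200_000    # USD, purchase price of one machine
installed_base = 4_000     # connected machines in the field
monthly_rate = 0.01        # service priced at 1% of purchase price per month

monthly_fee = machine_price * monthly_rate    # $2,000 per machine-month
arr = monthly_fee * 12 * installed_base       # annual recurring revenue
print(f"ARR: ${arr:,.0f}")                    # ARR: $96,000,000 (~$100M)
```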
While this business model has been aggressively deployed in high tech, we are still in the early days with machine manufacturers. There are some early leaders. Companies like GE and a major elevator supplier derive 50% of their revenue from service. Voltas, a large HVAC manufacturer, is an 80/20 company, meaning it derives 20% of its revenue from services. In healthcare, Abbott has introduced a digital service product called AlinIQ, and Ortho Clinical is selling Ortho Care as an annual subscription service. While some of this is lower-margin, human-powered, disconnected service, the value of a recurring revenue stream is not lost on these early leaders.
Once you can tell the worker how to maintain or optimize the security, availability or performance of the product, the next step is simply to take over that responsibility as the builder of the product. Over the last fifteen years we've seen the rise of Software-as-a-Service (SaaS) companies such as Salesforce.com, Workday and Blackbaud, which all deliver their products as a service. In the past seven years the same has happened with server hardware and storage, as companies like Amazon, Microsoft and Google provide compute and storage products as a service.
All of these new product-as-a-service companies have also changed their pricing to a per-transaction, per-seat, per-instance, per-month or per-year model. We're likely to see the same with agricultural, construction, transportation and healthcare machines. Again, there are early examples: Kaeser is delivering air-as-a-service, and AGCO is selling sugar-cane harvesters priced by the bushel harvested. In the consumer world, we're all familiar with Uber and Lyft, which provide transportation machines as a service, priced per ride. Of course, the most expensive operating cost of the ride is the human labor, so like those of us in high-tech software and hardware, they are looking to replace that labor with automation.
So why should you care about IoT, edge, 5G, AI and cloud computing? Not because they are cool technologies, but because they will enable you to double your top-line revenue and quadruple your margins with high-quality recurring revenue, all while building a widening gap between you and your competition.
For more detail, see the five keys to building and selling digital service products. In addition, the 160-page workbook from the executive workshop is available for purchase.
I've been wondering for a while what might be next for enterprise software. Whether you run a small private company or a large public one, where should you invest your time and money?
Maybe looking into the past can give us some guidance. Enterprise software has gone through three distinct eras. In the 1st era, infrastructure software companies like Microsoft and Oracle emerged, focused on programmers. Software developers used Microsoft Visual Basic and the Oracle database to build custom workflow applications for the enterprise throughout the 90s. By the late 90s, the 2nd era of enterprise software began with the creation of packaged on-premises enterprise workflow applications. Companies emerged including PeopleSoft, Siebel, SAP and Oracle. These applications focused on automating key workflows like order-to-cash, purchase-to-pay or hire-to-fire. Enterprises didn't need to hire programmers to develop these workflow applications; they only needed to buy, implement and manage them. The 3rd era began in the 2000s with the delivery of packaged workflow applications as a cloud service. Examples abound, including Salesforce, Workday, Blackbaud and ServiceNow. This 3rd era eliminated the need for the enterprise to hire operations people to manage the applications and has accelerated the adoption of packaged enterprise workflow applications. While you could still hire programmers to write a CRM application, and operations people to manage it, why would you?
Let's now switch our attention to analytics, which is not focused on automating a process, but instead on learning from the data to discover deeper insights, make predictions, or generate recommendations. Analytics has been populated by companies specializing in the management of data (e.g., MongoDB, Teradata, Splunk, Cloudera, Snowflake, Azure SQL, Google BigQuery, Amazon Redshift); companies dedicated to providing tools for developers or business analysts (e.g., SAS, Tableau, Qlik and Pivotal); as well as software for data engineers, including formerly public companies such as MuleSoft (acquired by Salesforce) and Informatica (acquired by Permira).
Furthermore, thanks to innovations in the consumer Internet (e.g., Facebook facial recognition, Google Translate, Amazon Alexa), there are now hundreds of open-source projects and cloud services available that provide a wide array of AI and analytic infrastructure building blocks. For those interested in geeking out, here is a brief introduction. Some of this technology dramatically lowers cost: consider that today, for about $1,000, I can put 1,000 servers to work for 48 hours to run a training cycle and build a machine-learning model.
I'm going to use the label AI to refer to the entire spectrum of analytic infrastructure technology, partly because it sounds cooler. Today we are largely in the 1st era: the software industry is providing AI infrastructure software and requiring the enterprise to hire programmers and ML experts to build the application, as well as DevOps people to manage the deployment. This is nearly the same as the 1st era of enterprise workflow software.
If we're to follow the same sequence as workflow applications, we need to move beyond the 1st era, with its focus on developers, and start building enterprise AI applications.
So what is an enterprise AI application?
Enterprise AI applications serve the worker, not the software developer or business analyst. The worker might be a fraud-detection specialist, a pediatric cardiologist or a construction-site manager.
Enterprise AI applications have millennial UIs and are built for mobile devices, augmented reality and voice interaction.
Enterprise AI applications use historical data. Most enterprise workflow applications discard data once the workflow or the transaction completes.
Enterprise AI applications use lots of data. Jeff Dean has taught us that with more data and more compute, we can achieve near-linear improvements in accuracy.
Enterprise AI applications use many heterogeneous data sources inside and outside the enterprise to discover deeper insights, make predictions, generate recommendations and learn from experience.
A good example of a consumer AI application is Google Search. It’s an application focused on the worker, not the developer, with a millennial UI and uses many heterogeneous data sources. Open the hood and you’ll see a ton of infrastructure software technology inside. So what are the challenges of building enterprise AI applications?
1. The nice thing about transactional or workflow applications is that the processes they automate are well defined and follow some standards; thus there is a finite universe of these apps. Enterprise AI applications will be much more diverse, serving workers as different as the service specialist for a combine harvester, a radiologist or the manager of an offshore oil-drilling rig.
2. The application development teams will be staffed differently. Teams will have a range of expertise including business analysts, domain specialists, data scientists, data engineers, DevOps specialists and programmers. With such a wide array of cloud-based software, even programming will look different.
3. Finally, the development of these analytic applications will require a different methodology than was used to build workflow applications. In a workflow application, we can judge whether the software worked correctly or not. In an enterprise AI application, we'll have to learn what an ROC curve is and decide what level of false positives and false negatives we're willing to tolerate (the sketch below makes this concrete).
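A minimal scikit-learn sketch on synthetic data; the dataset, the model and the 5% false-positive budget are illustrative assumptions, not anything from a real deployment:

```python
# Sketch: pick an operating point on an ROC curve.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced data standing in for e.g. fraud detection.
X, y = make_classification(n_samples=5_000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

fpr, tpr, thresholds = roc_curve(y_test, scores)
print(f"AUC: {roc_auc_score(y_test, scores):.3f}")

# The business decision: suppose we can tolerate a 5% false-positive rate.
i = int(np.argmin(np.abs(fpr - 0.05)))
print(f"FPR {fpr[i]:.1%} -> TPR {tpr[i]:.1%} at threshold {thresholds[i]:.2f}")
```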
Some companies are emerging to serve the developer, including Teradata and C3, as well as the compute-and-storage cloud service providers Microsoft, Google and Amazon. While there is plenty of room for creating custom enterprise AI applications, the true beginning of the next era will be the emergence of packaged AI applications. Some examples are beginning to appear. Visier, founded by John Schwarz, the former CEO of Business Objects, has built a packaged application focused on the HR worker. Yotascale has chosen to focus on the IT worker who is managing complex cloud infrastructure. Welline built a packaged enterprise AI application for the petro-technical engineers in the oil & gas industry using the Maana platform. Lecida, founded by some of my former Stanford students, is delivering a collaborative intelligence application for workers who manage industrial (construction, pharma, chemical, utility...) machines. They are using AI technology to make machines smart enough to "talk" with human experts when they need to. Those models are built in less than 48 hours using a ton of software technology.
For data to be the new oil, we need to begin the next era and start building custom or packaged enterprise AI applications. These applications serve the worker, not the software developer or business analyst. The worker might be a reliability engineer, a pediatric endocrinologist or a building manager. Enterprise AI applications will have millennial UIs built for mobile devices, augmented reality and voice. And these applications will use the oceans of data coming from both the Internet of People and the Internet of Things to discover deeper insights, make predictions or generate recommendations. We need to move beyond infrastructure to applications.
It's no secret that over the past four years there have been dramatic improvements in the use of AI technology to recognize images, translate text, win the game of Go or talk to us in the kitchen. Whether it's Google Translate, Facebook facial recognition or Amazon's Alexa, these innovations have largely been focused on the consumer.
On the enterprise side, progress has been much slower. We've all been focused on building data lakes (whatever those are) and trying to hire data scientists and machine-learning experts. While this is fine, we need to get started building enterprise AI applications. Enterprise AI applications serve the worker, not the software developer or business analyst. The worker might be a fraud-detection specialist, a pediatric cardiologist or a construction-site manager. Enterprise AI applications leverage the amazing amount of software that has been developed for the consumer world. These applications have millennial UIs and are built for mobile devices, augmented reality and voice interaction. Enterprise AI applications use many heterogeneous data sources inside and outside the enterprise to discover deeper insights, make predictions or generate recommendations. A good example from the consumer world is Google Search: an application focused on the worker, not the developer, with a millennial UI and many heterogeneous data sources. Open up the hood and you'll see a ton of software technology inside.
With the advent of cloud computing and the continued development of open-source software, building application software has changed dramatically in the past five years. The change might be as dramatic as moving from ancient mud brick to modern prefab construction. As you'll see, a ton of software technology has become available. Whether you're an enterprise building a custom application or a new venture building a packaged application, you'll need to do three things.
1. Define the use case. Define the application. Who is the worker? Is it an HR professional, a reliability engineer or a pediatric cardiologist?
2. The Internet is the platform. Choose wisely. We'll discuss this in more depth below.
3. Hire the right team. Teams will need a range of expertise including business analysts, domain experts, data scientists, data engineers, DevOps specialists and programmers.
For enterprises considering building scalable, enterprise-grade AI applications, there's never been a better time: there are hundreds of choices, many inspired by innovations in the consumer Internet. To convey the breadth, I've (somewhat arbitrarily) created sixteen categories, each with a brief description and some example products. We'll mix open-source software, which can run on any compute-and-storage cloud service, with managed cloud services.
1. Compute & Storage Cloud Services provide compute and storage resources on demand, managed by the provider of the service. While you could build your application on on-premises compute and storage, doing so would both increase the number of technology decisions and raise the upfront cost, in capital equipment and in the people needed to manage the resources. Furthermore, the ability to put 1,000 servers to work for 48 hours for less than $1,000 is an economic model unachievable in the on-premises world. Choices include but are not limited to AWS, Google Cloud, Microsoft Azure, Rackspace, IBM Cloud and AliCloud.
2. Container Orchestration. VMware pioneered the ability to create virtual hardware machines, but VMs are heavyweight and non-portable. Modern AI applications use containers based on OS-level virtualization rather than hardware virtualization. Containers are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host file system, they are portable across clouds and OS distributions. Container orchestration manages the computing, networking and storage infrastructure on behalf of user workloads. Choices include but are not limited to Kubernetes, Mesos, Swarm, Rancher and Nomad.
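As a tiny taste of what "on behalf of user workloads" looks like programmatically, here is a sketch using the official Kubernetes Python client; it assumes you have a cluster and a local kubeconfig, and any orchestrator on the list has an equivalent API:

```python
# Ask the orchestrator what it is currently running.
from kubernetes import client, config

config.load_kube_config()        # reads credentials from ~/.kube/config
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```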
3. Batch Data Processing. As data sets get larger, an application needs a way to process them efficiently. Instead of using one big computer to process and store the data, modern batch-processing software clusters commodity hardware together to analyze large data sets in parallel. Choices include but are not limited to Spark, Databricks, Cloudera, Hortonworks, AWS EMR and MapR.
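A minimal PySpark sketch of the idea: the same few lines run on a laptop or on a cluster of commodity machines (the file name and columns are hypothetical):

```python
# Batch processing: aggregate a large dataset in parallel with Spark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-example").getOrCreate()

# 'events.csv' and its columns are illustrative placeholders.
df = spark.read.csv("events.csv", header=True, inferSchema=True)
summary = (df.groupBy("machine_id")
             .agg(F.count("*").alias("events"),
                  F.avg("temperature").alias("avg_temp")))
summary.show()
```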
4. Stream Data Processing. An AI application designed to interact with near-real-time data needs stream-processing software. Such software has three key capabilities: publish and subscribe to streams of records, store streams of records in a fault-tolerant and durable way, and process streams of records as they occur. Choices include but are not limited to Spark Streaming, Storm, Flink, Apex, Samza and IBM Streams.
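For flavor, a Spark Structured Streaming sketch that counts records per sensor as they arrive; the local socket source is a toy stand-in for a real feed:

```python
# Stream processing: running counts over a live stream of sensor IDs.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-example").getOrCreate()

# Toy source: newline-delimited sensor IDs arriving on a local socket.
lines = (spark.readStream.format("socket")
         .option("host", "localhost").option("port", 9999).load())

counts = lines.groupBy(F.col("value").alias("sensor_id")).count()

(counts.writeStream.outputMode("complete")
       .format("console").start()
       .awaitTermination())
```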
5. Software Provisioning. From traditional bare metal to serverless, automating the provisioning of infrastructure is the first step in automating the operational life cycle of your application. Software-provisioning frameworks are designed to provision the latest cloud platforms, virtualized hosts and hypervisors, network devices and bare-metal servers, and they provide the connective tissue in your process pipelines. Choices include but are not limited to Ansible, Salt, Puppet, Chef, Terraform, Troposphere, AWS CloudFormation, the Docker suite, Serverless and Vagrant.
6. IT Data Collect. Historically, many IT applications were built on SQL databases. Any analytic application will need the ability to collect data from a variety of SQL data sources. Choices include but are not limited to Teradata, Postgres, MongoDB, Microsoft SQL Server and Oracle.
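In practice, collecting from a SQL source is often a few lines of pandas plus SQLAlchemy; the connection string, table and columns here are hypothetical:

```python
# Pull relational data into a dataframe for analysis.
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical Postgres connection string and table.
engine = create_engine("postgresql://user:password@host:5432/erp")
orders = pd.read_sql("SELECT order_id, customer_id, total FROM orders", engine)
print(orders.head())
```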
7. OT Data Collect. Analytic applications involving sensor data need to collect and process time-series data. Products include traditional historians such as AspenTech InfoPlus.21, OSIsoft's PI and Schneider's Wonderware, as well as traditional database technologies extended for time series, such as Oracle. For newer applications, choices include but are not limited to InfluxDB, Cassandra, PostgreSQL, TimescaleDB and OpenTSDB.
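Several of these stores speak SQL (TimescaleDB, for instance, is an extension of PostgreSQL), so a time-series rollup can be a plain query; a sketch with hypothetical table and column names:

```python
# Downsample raw sensor readings to hourly averages.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@host:5432/sensors")
hourly = pd.read_sql(
    """
    SELECT date_trunc('hour', ts) AS hour,
           sensor_id,
           avg(reading)           AS avg_reading
    FROM readings
    GROUP BY 1, 2
    ORDER BY 1
    """,
    engine,
)
print(hourly.head())
```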
8. Message Broker. A message broker is a program that translates a message from the messaging protocol of the sender to the messaging protocol of the receiver. When you have messages coming from hundreds of thousands or millions of endpoints, you'll need a message broker to create a centralized store and processor for those messages. Choices include but are not limited to Kafka, Kinesis, RabbitMQ, Celery, Redis and MQTT.
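A minimal sketch with the kafka-python client (the broker address and topic name are assumptions; the other brokers on the list support the same publish/consume pattern):

```python
# Publish and consume machine events through a Kafka broker.
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("machine-events", {"machine_id": 42, "temp_c": 71.5})
producer.flush()

consumer = KafkaConsumer(
    "machine-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)          # {'machine_id': 42, 'temp_c': 71.5}
    break
```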
9. Data Pipeline Orchestration. Data engineers create data pipelines to orchestrate the movement, transformation, validation and loading of data from source to final destination. Data-pipeline orchestration software lets you define the collection of tasks you want to run, organized in a way that reflects their relationships and dependencies. Choices include but are not limited to Airflow, Luigi, Oozie, Conductor and NiFi.
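An Airflow pipeline, for example, is just Python declaring tasks and their dependencies; a minimal sketch (the task bodies are placeholders, and the import path assumes Airflow 2.x):

```python
# A tiny DAG: extract -> transform -> load, scheduled daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the sources")

def transform():
    print("clean and join the data")

def load():
    print("write the result to the warehouse")

with DAG("etl_example", start_date=datetime(2024, 1, 1),
         schedule_interval="@daily", catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3    # relationships and dependencies, as described above
```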
10. Performance Monitoring. Any application, including an analytic application, requires real-time performance monitoring to find bottlenecks and, ultimately, to predict performance. Choices include but are not limited to Datadog, AWS CloudWatch, Prometheus, New Relic and Yotascale.
11. CI/CD. Continuous integration (CI) and continuous delivery (CD) software embodies a set of operating principles and practices that enable application development teams to deliver code changes more frequently and reliably. The implementation is known as the CI/CD pipeline and is one of the best practices for DevOps teams. Choices include but are not limited to Jenkins, CircleCI, Bamboo, Semaphore CI and Travis.
12. Backend Frameworks. Backend frameworks consist of languages and tools used for server-side programming. A backend framework speeds development by providing a higher-level programming interface for designing data models, handling web requests and other commonly required features. Choices include but are not limited to Flask, Django, Pyramid, Dropwizard, Elixir and Rails.
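To show how little code "a higher-level programming interface" can mean, here is a minimal Flask sketch; the endpoint and payload are invented for illustration:

```python
# A minimal backend: one JSON endpoint serving machine status.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/machines/<int:machine_id>/status")
def machine_status(machine_id):
    # A real application would query the data stores described above.
    return jsonify({"machine_id": machine_id, "status": "healthy"})

if __name__ == "__main__":
    app.run(port=8000)
```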
13. Front-end Frameworks. Applications need a user interface, and there are numerous front-end frameworks for building them. These frameworks serve as a base for developing single-page or mobile applications. Choices include but are not limited to Vue, Meteor, React, Angular, jQuery, Ember, Polymer, Aurelia, Bootstrap, Material UI and Semantic UI.
14. Data Visualization. An analytic application needs plotting software that can produce publication-quality figures in a variety of hard-copy formats and interactive environments across platforms. Data-visualization software lets you generate plots, histograms, power spectra, bar charts, error charts, scatter plots and more with just a few lines of code. Choices include but are not limited to Tableau, Power BI, Matplotlib, D3, VX, react-timeseries-chart, Bokeh, Seaborn, Plotly, Kibana and Grafana.
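"A few lines of code" is meant literally; a Matplotlib sketch on synthetic sensor data:

```python
# Plot a day of synthetic sensor readings.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 24, 500)                           # hours
reading = 70 + 5 * np.sin(t) + np.random.randn(500)   # synthetic temperatures

plt.plot(t, reading, linewidth=0.8)
plt.xlabel("Hour of day")
plt.ylabel("Temperature")
plt.title("Synthetic sensor trace")
plt.savefig("sensor_trace.png", dpi=150)
```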
15. Data Science. Data science tools let you create and share documents that contain live code, equations, visualizations and narrative text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, and support for large, multi-dimensional arrays and matrices. Choices include but are not limited to Python, R, SciPy, NumPy, Pandas, NetworkX, Numba, SymPy, Jupyter Notebook and JupyterLab.
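A typical notebook cell for the cleaning-and-transformation step might look like this pandas sketch (the file and column names are hypothetical):

```python
# Load, clean and summarize a dataset inside a notebook.
import pandas as pd

df = pd.read_csv("readings.csv", parse_dates=["ts"])   # hypothetical file
df = df.dropna(subset=["reading"])                     # drop missing readings
df["reading_z"] = (df["reading"] - df["reading"].mean()) / df["reading"].std()
print(df.groupby(df["ts"].dt.date)["reading"].agg(["mean", "max", "count"]))
```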
16. Machine Learning. Machine-learning frameworks provide useful abstractions that reduce boilerplate code and speed up model development. ML frameworks are useful for building feed-forward networks, convolutional networks and recurrent neural networks. Choices include but are not limited to Python, R, TensorFlow, scikit-learn, PyTorch, Spark MLlib, Spark ML, Keras, CNTK, DyNet, Amazon Machine Learning, Caffe, Azure ML Studio, Apache MXNet and MLflow.
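And a minimal model-building sketch with scikit-learn (synthetic data; a real application would train on the historical machine data discussed earlier):

```python
# Train and evaluate a simple classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```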
If you're curious, check out some of the product choices Uber made.
We need to begin the next era of enterprise software and start building custom or packaged enterprise AI applications: applications that serve workers, not developers; that have millennial UIs; and that use the oceans of data coming from both the Internet of People and the Internet of Things. Luckily, many of the infrastructure building blocks are now here, so stop using those mud bricks.
A few months ago I had breakfast at Joannie's Café in Palo Alto with the CEO of a company that builds machines for the semiconductor industry. I asked him how many machines he had in the field, and he said around 10–20,000. The imprecision of his answer should have been my first clue. I went on to ask him, "How much service revenue do you generate?" He responded with the universal sign of a goose egg. I asked, "Why zero?" He replied, "No one wants to pay for service."
Of course, the reason no one pays for service is that he has defined service as break-fix support. Anyone who has just bought a $250,000 machine assumes it will work, so why pay for service?
The US economy is 85+% a service economy. So what is service? Is it answering the phone nicely from Bangalore? Is it flipping burgers at McDonald’s? No. Service is the delivery of information that is personal and relevant to you. That could be the hotel concierge giving you directions to the best Sichuan Chinese restaurant in town, or your doctor telling you that, based on your genome and lifestyle, you should be on a specific medication. Service is personal and relevant information.
Service is information. In 2004, the Oracle Support organization studied 100 million service requests and found that over 99.9% of them had been answered with already-known information. Service is information on how to maintain or optimize the performance, security and availability of the software. And if I can tell you how to maintain and optimize the performance, security and availability of the software, the next logical step is to do it for you. In the software industry you know this as software-as-a-service: the company that builds the product services the product. Salesforce, ServiceNow and Blackbaud all build enterprise software products and service them.
But back to my CEO. I asked him: do you think a customer would pay for information on how to maintain or optimize the performance, security and availability of the product? As the builder of the product, you are the natural aggregation point for this information, since every customer calls, texts or emails you first when they need it. What if you were to aggregate all of that information? And what if you connected all of your machines? Wouldn't you then know more than anyone about how to maintain or optimize the performance, availability and security of the machine? And if he were to charge just 1% of the purchase price of the machine per month for that digital service product, he could double the revenues and quadruple the margins of his company.
Any company that builds agricultural, life science, construction, healthcare, packaging, manufacturing, printing, power generation, and transportation machines has the opportunity to build a new high growth, high margin recurring revenue digital service business. Service is not break-fix. Service is personal and relevant information.
One of the joys of teaching at Stanford is the quality of the students. A few years ago, I met Dr. Anthony Chang, who was coming back to school to earn a master's degree in bioinformatics after having already earned his MBA, MD and MPH. The degree took him three and a half years to complete, as he was still on call as chief of pediatric cardiology at Children's Hospital of Orange County, didn't know how to program, and, as a lifelong bachelor, had decided to adopt two children under the age of two.
Among his many accomplishments is starting the AIMed conference, which, as the name implies, focuses on AI in medicine. It's held annually in mid-December at the Ritz-Carlton in Laguna Niguel. Anthony attracts an amazing group of doctors who can talk about both pediatric endocrinology and graph databases. Since the conference is held near Christmas, I often call Anthony "The Tree" and the guest speakers the ornaments. This year, I was asked to speak about the future of AI in medicine.
But before we talk about the future, let's talk about the past. I was struck by one of the doctors describing an $80M EMR application implementation. Having experience implementing enterprise ERP applications, I was amazed at the number. It turns out this is not even the high-water mark; examples extend north of $1B. Seriously?
Can an EMR application be the foundation for the future of AI in medicine? These systems are largely based on software from the 80s; in car terms, it's like trying to build an autonomous vehicle from a Model T parts bin. Furthermore, these applications were architected to serve billing, not patients, so there is no way to deliver personalized healthcare. After all, why should your bill look different from mine? And finally, rather than being designed to collect and learn from exabytes of global data from healthcare machines, they are built to archive notes from isolated doctors who spend valuable time as typists. Maybe you should spend $10M to feed a billing application, but not $100M.
The future of AI in medicine depends on data. The more data, the more accuracy. Where is that data? Not in the EMR. It's in the healthcare machines: the MRI, ultrasound, CT, immunoanalyzer, X-ray, blood analyzer, mass spectrometer, cytometer and gene sequencer. Unfortunately, the world of medicine lives in a disconnected state. My informal survey suggests that less than 10% of the healthcare machines in a hospital are connected. For those in computing, it looks like the early 1990s, when NetWare, Windows, Unix and AS/400 machines couldn't talk to each other, until the Internet.
It turns out that in 1994, when the Internet reached 1,000,000 connected machines, the first generation of Internet companies like Netscape and eBay took off. And as the number of connected machines grew, we ended up with even more innovation. Who could have imagined Netflix, Amazon, Google or Lyft before the Internet?
It turns out that if you connected all the healthcare machines in all the children's hospitals in the world, you'd get to about 500,000 machines, close to the 1,000,000 that transformed the Internet. What would this enable? To begin with, we could get rid of CD-ROMs and the US Mail as the mechanism for doctors to share data across the country. The CheXNet pneumonia digital assistant was validated against a radiologist-annotated test set of only 420 X-rays; what could it do with 4,200,000 images? And I'm sure this is just scratching the surface of what will be possible.
It's clear that the world of medicine, where we pour knowledge into an individual's head and let them, their machines and their patients operate in isolation, is coming to an end. The challenges of connecting healthcare machines, collecting the data and learning from it are immense, but the benefit might actually change the world, and it could cost a lot less than $100M.