Every year, at the Congress on the Future of Engineering Software (COFES), industry leaders gather to discuss — and sometimes speculate on — the characteristics and attributes of the tools and technologies next-generation engineers and designers might need. This year, at COFES 2014 in April, in the track titled “Cognition?,” Engineering News Record‘s editor Tom Sawyer asked, “With the formation of the IBM Watson Group, we are at the early stages of deployment of what can be thought of as applied cognition … What are the implications and opportunities for design and engineering?” Later, a roundtable discussion continued the conversation: “[IBM] Watson’s combination of natural-language capabilities and an ability to generate hypotheses should be able to address big problems in fields such as customer relations, finance, healthcare, and R&D … What will machine cognition mean to engineering and design?”
In a room just a few doors down from Sawyer’s discussion group, Microsoft’s PLM Solutions director Simon Floyd hosted a track to discuss “Microsoft’s role in machine learning, predictive analytics, advanced decision-making, and the impact on design & engineering.” Another roundtable posed the possibility of “Swarms, Autonomous Devices, and Self-Programming Machines.” The summary read, “Design theory and concepts are emerging for these autonomous systems — particularly for swarms of multiple-specialty systems, and for systems that design systems. What types of tools will we need to do this? What’s our role once they have been set in motion? How do we build in safety? What don’t we know?”
The cloud is a critical component of Microsoft’s vision for machine learning. In a recent New York Times blog post, Joseph Sirosh, Microsoft’s vice president for machine learning, suggested that setting up predictive analysis and machine learning could be a simple “drag and drop” function, easy enough for high school students to operate. According to the blog post, “Machine learning computers examine historical data through different algorithms and programming languages to make predictions. The process is commonly used in internet search, fraud detection, product recommendations and digital personal assistants, among other things.”
Microsoft’s Floyd thinks the cloud foundation of machine learning makes the technology particularly well-suited to “PLM, CAD, or anything else that can consume, process or act on information.” But part of the training process for machine learning is gathering historical data. He explained, “Yes, you would need to agree to your actions being logged in the same way you agree to a customer improvement program today. The software captures telemetry and uses it to help discover how an app is used.” That’s the foundation for the machine to understand that when you add a draft angle to a part, your most probable next move is to add PMI (product & manufacturing info) for a specific surface finish.
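To make the idea concrete, here is a minimal, purely illustrative sketch of how such a system might suggest the “most probable next move” from logged sessions. The action names and data are hypothetical, not from any real product’s telemetry; a production system would use far richer models than this simple first-order frequency count.

```python
from collections import Counter, defaultdict

# Hypothetical logged CAD sessions: each is an ordered list of user actions.
sessions = [
    ["sketch", "extrude", "add_draft_angle", "add_pmi"],
    ["sketch", "extrude", "add_draft_angle", "add_pmi"],
    ["sketch", "revolve", "fillet", "add_pmi"],
]

# Count which action most often follows each action (a first-order model).
followers = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        followers[current][nxt] += 1

def suggest_next(action):
    """Return the historically most frequent follow-up action, or None."""
    counts = followers.get(action)
    return counts.most_common(1)[0][0] if counts else None

print(suggest_next("add_draft_angle"))  # most frequent follower in the log
```

In this toy log, “add_draft_angle” is always followed by “add_pmi,” so that is what the model recommends — mirroring Floyd’s draft-angle-then-PMI example.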
What’s more, the products themselves could become much more self-aware about their life cycles and probable failures. “If a product is able to predict that it will fail, lose functionality, etc. ahead of time and gives you an opportunity to do something about it, then you’ll be pleasantly surprised because most products today do not have such capabilities,” noted Floyd. It’s a great benefit to the consumer, but for supply chains and industries that revolve around maintenance, repair, and replacement parts, it could be a culture shock.
Sarah Mcbryan, a student at Arizona State University studying biomedical engineering, participated in the COFES roundtable discussion on cognition. She tried to strike a cautious note. We should “leave room for improvement [in design and engineering software], but no need to implement drastic changes,” she said.
With a keen eye on prosthetic designs, Mcbryan said she would like to “improve the human-machine interface.” She has also heard about devices that can be controlled by electroencephalography (EEG), or brain waves. (Check out this Wired article on the “Mind Control” or “brain-computing” trend.) In other words, you think what you want to do, and the hardware or software executes your command. “That would allow people who are handicapped or cannot otherwise use a computer or machines to design,” she reasoned. Some software vendors have begun exploring the idea of gesture control, but EEG-controlled CAD software is virtually unknown — at least for the present. (For early attempts in gesture-based software control, read my previous article on the topic here.)
If she were given the chance to design her own design software, Mcbryan revealed, “I’d prefer to use gestures [to control it] — more so than my voice.” (Perhaps using movements to control instruments feels more natural for Mcbryan because she happens to like swing dancing and is also a cellist.)
Floyd pointed out that, with cognition and machine learning, “The idea is to use the analysis tool to drive a recommendation, decision, or action,” but that doesn’t exclude the human’s role. “Those actions are governed to avoid chaos in the same way an engineering change is not made without approval,” he added.
Mcbryan said, “I think [engineering and design software] should involve machine learning but I don’t think it should be reliant on it.”
Overheard at COFES 2014:
“[An intelligent or self-aware system like IBM Watson] might free me from some of the more mundane things, so I can think bigger thoughts,” someone reasoned.
Someone asked, “Is it a secretary for the designer?” Another suggested, “It’s a technician to the engineer.”
But how will this system, or computer, gather knowledge? Someone else brought up the unsettling truth: “[The multi-sensory living computer] needs to listen to us, watch us … and be privy to what we’re working on.”
So why pay the heavy toll of lost privacy to spawn a cognitive system? Someone answered: “Take the top five most hazardous jobs — mining, construction, and so on — and use robots instead of people.”
IBM’s promotional video on Watson: