Accelerating What’s Ahead in Aerospace and Defense
From AI to hybrid-electric propulsion, RTX’s transformative technologies are defining our future.
The aerospace and defense industries are in the midst of rapid transformation. As air travel surges, there’s a pressing need to increase fuel efficiency and reduce operating costs. Simultaneously, the defense sector faces increasing pressure to modernize its capabilities and improve operational efficiency amid budget constraints. Add technological advances and shifting geopolitical currents to the mix, and it’s clear the sector is at a crossroads. That’s why industry leaders are seeking innovative solutions that not only enhance performance but also reduce costs.
RTX is at the forefront of these efforts through its three businesses: Collins Aerospace, Pratt & Whitney, and Raytheon. For example, the Pratt & Whitney GTF™ engine has enabled up to 20% improved fuel efficiency for today’s single-aisle aircraft, helping operators save an estimated 2 billion gallons of fuel since entering service in 2016. Meanwhile, the company’s next-generation defense systems make it a critical player in national and global security.
The rapid clip of progress stems from collaboration and synergy among the businesses and the company’s research centers, the RTX Technology Research Center (RTRC) and RTX BBN Technologies (BBN), which have provided the company with central research expertise for nearly a century. These entities work in tandem to solve some of the industry’s most complex problems and to anticipate future challenges along the way.
“It’s all about unlocking the scale of this company,” says Juan de Bedout, chief technology officer at RTX. “We have 57,000 engineers in every discipline imaginable, with a rich history in many of these spaces. We can take our resources, pull them together, invest in a joint way, and drive solutions and outcomes that are much faster for our individual businesses collectively.”
New Electric Technologies
One key area of research centers on hybrid-electric propulsion, which combines traditional fuel-burning jet engines with electric motors and batteries to increase fuel efficiency and performance. While Pratt & Whitney’s engines have contributed to a significant increase in fuel efficiency since the dawn of the jet age, hybrid-electric propulsion is one of many technologies which could help continue this trend.
“We’re finding opportunities for a technology that’s evolved in one space to cross over to another context, and potentially create some disruption in a new domain,” says Scott Kaslusky, vice president of aerospace technology at RTX.
The company is spearheading hybrid-electric propulsion technology across a range of demonstrator programs. One such demonstrator combines a highly efficient Pratt & Whitney Canada thermal engine with a 1-megawatt Collins Aerospace electric motor, providing the potential to optimize performance across different phases of flight.
RTX also recently reached another key milestone with a hybrid-electric demonstrator known as STEP-Tech (Scalable Turboelectric Powertrain Technology), a collaboration between Collins Aerospace, Pratt & Whitney and the RTRC that will be used to develop future electric-propulsion technologies. Recent tests proved that STEP-Tech’s battery system can start the thermal engine and that electrical power can recharge the batteries driving the devices that provide propulsion. The platform’s modularity will allow researchers to test and refine different powertrain configurations.
“It’s really about finding efficiency gains and reducing operational costs,” Kaslusky says. “From a technology perspective, you talk about the problem while looking for a solution, and the solution while looking for a problem. Collaboration is what opens the door to bringing those two things together.”
New Materials, New Possibilities
Hybrid-electric flight is just one dimension of RTX’s technology efforts. Another is developing new classes (or “generations”) of advanced materials used in aerospace engineering.
The RTRC is particularly focused on high-entropy alloys, which are specialized metal blends that are uniquely strong, lightweight, and capable of operating at temperatures that would melt most of the metals in the periodic table. Put into practice, they can be used to craft components that make planes lighter or allow aircraft engines to operate at temperatures hundreds of degrees hotter than those available today, resulting in greater fuel efficiency.
The challenge with high-entropy alloys is that there are virtually infinite potential combinations of materials, says Andreas Roelofs, director of the RTRC and RTX’s vice president of research and development. “If you want to read every paper, every patent and every book about alloys in every language in the world, you’ll be busy for a couple of lifetimes.”
By using artificial intelligence and machine learning, RTX is significantly shortening development time from decades to a few years. These models pore over published findings to suggest new alloys from more than 600 billion possible combinations, enabling scientists to make more informed decisions and spend more time testing the materials with the most potential.
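The screening loop described here can be pictured with a short sketch: enumerate candidate compositions, score each with a surrogate model, and keep only the most promising for physical testing. This is a minimal illustration, assuming a hypothetical `predict_merit` stand-in for a trained model; it is not RTX's actual pipeline, and the element palette is arbitrary.

```python
import itertools
import random

# Illustrative element palette; a real high-entropy-alloy search draws on
# dozens of elements plus continuous composition fractions, which is how
# search spaces reach hundreds of billions of combinations.
ELEMENTS = ["Al", "Co", "Cr", "Fe", "Ni", "Ti", "Nb", "Mo"]

def predict_merit(composition):
    """Hypothetical stand-in for a trained surrogate model (e.g., a
    regression fit to published alloy data). Returns a mock score,
    stable within a run."""
    return random.Random(hash(composition)).random()

# Enumerate five-element candidates and rank by predicted merit, so that
# scientists spend lab time only on the top few.
candidates = itertools.combinations(ELEMENTS, 5)
ranked = sorted(candidates, key=predict_merit, reverse=True)
for alloy in ranked[:3]:
    print("-".join(alloy), round(predict_merit(alloy), 3))
```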
“You need to keep investing and moving the needle,” Roelofs says. “There’s really no better place to drive the development of new ideas that will have a real impact in aerospace and defense. That is the best gift.”
Innovation Takes Wing
Artificial intelligence and machine learning technologies are also being built into next-generation products to provide robust new capabilities while ensuring they aren’t overwhelming to operators. For example, the technology can assist service crews and help minimize downtime, or guide operators through complex decisions, even using pattern recognition to analyze and address problems. And on the engineering side, AI could be used to aid in the design of future aircraft engines and radar systems.
“Artificial intelligence tools allow us to find incredibly powerful, rich and high-performing solutions by combining the creativity of human designers with the ability of machines to handle complexity,” de Bedout says.
At the end of the day, each of these transformative technologies—hybrid-electric propulsion, high-entropy alloys, predictive AI, and scores of others—helps RTX meet the aerospace and defense industries’ critical needs not only now, but also well into the future. And as the scientists and engineers at Collins Aerospace, Pratt & Whitney and Raytheon share their expertise and combine resources, they’re able to develop solutions that much faster.
“We see an opportunity for commercial flight that’s going to be more efficient, faster and more streamlined from source to destination for every traveler,” de Bedout says. “On the defense side, we’re creating a future that helps us protect democracies around the world. Getting there is all about synergy and collaboration—that’s what’s so powerful.”
Author: Anonymous, RTX
TRIZ Applications: In this informative description of how RTX combines the resources of its businesses (Collins Aerospace, Pratt & Whitney, and Raytheon), we see a clear application of expanding the solution space to define and solve problems faster and with better results. The development of hybrid aircraft that pair jet engines with electric motors to increase fuel efficiency and range applies an approach the automotive industry has used for almost 20 years.
The use of TRIZ tools and methods with LLMs is a powerful combination for solving today’s complex problems faster and with better results.
TRIZ: The Backbone of Innovation and Problem-Solving
Unchain your brain by using techniques that are proven by empirical data.
By Richard Langevin February 18, 2025
TRIZ, or the Theory of Inventive Problem Solving, is a methodology developed by Genrich Altshuller and his colleagues in the mid-20th century. The system is designed to solve engineering and technical problems creatively. The core idea is that innovation is not random but follows predictable patterns, which TRIZ captures in tools developed from the rigorous study of patents.
Here are some of the main components of TRIZ:
- Contradictions: Identifying and resolving contradictions within a system. Instead of settling for compromises, TRIZ aims to eliminate the contradictions themselves.
- 40 Principles of Invention: A set of strategies to overcome technical challenges.
- Ideal Final Result (IFR): A vision of the best possible outcome without constraints.
- Evolution Patterns: Recognizing that systems evolve in predictable ways over time.
- Functional Analysis: Understanding and improving the functions of a system.
TRIZ is widely used in various industries to foster innovation and streamline the problem-solving process.
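As a concrete illustration of the first and third components above, here is a minimal Python sketch, with invented field names, of how a contradiction and its Ideal Final Result might be recorded at the start of a TRIZ session; it is not standard TRIZ notation.

```python
from dataclasses import dataclass

@dataclass
class Contradiction:
    """A technical contradiction: improving one parameter worsens another."""
    improving: str   # the parameter we want to improve
    worsening: str   # the parameter that degrades as a side effect

@dataclass
class ProblemStatement:
    system: str
    contradiction: Contradiction
    ideal_final_result: str  # the best outcome, stated without constraints

problem = ProblemStatement(
    system="jet engine",
    contradiction=Contradiction(improving="operating temperature",
                                worsening="turbine blade life"),
    ideal_final_result="hotter combustion with no loss of blade life",
)
print(f"{problem.contradiction.improving} vs. {problem.contradiction.worsening}")
```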
TRIZ can be considered as the backbone of technological innovation for several key reasons:
- Systematic Approach: Unlike traditional brainstorming, which can be somewhat random, TRIZ provides a structured method for solving problems and generating innovative ideas. This systematic approach increases the efficiency and effectiveness of the innovation process.
- Predictable Patterns of Evolution: TRIZ is based on the analysis of thousands of patents and innovations across various industries. By identifying patterns of technological evolution and problem-solving, TRIZ provides a set of tools and principles that can be applied to new problems, making the innovation process more predictable.
- Elimination of Contradictions: One of the core principles of TRIZ is to resolve contradictions without compromise. This approach often leads to breakthrough innovations, as it encourages thinking beyond conventional solutions.
- Comprehensive Toolkit: TRIZ offers a wide range of tools, including the 40 inventive principles, the contradiction matrix, and the concept of the Ideal Final Result (IFR). These tools help innovators systematically analyze and solve complex problems.
- Cross-Industry Applicability: TRIZ is not limited to any specific field or industry. Its principles can be applied to any technical or engineering problem, making it a versatile and powerful method for fostering innovation.
- Focus on Functionality: TRIZ emphasizes the importance of improving the functionality of a system. By focusing on the functions and their interactions, TRIZ helps identify areas for improvement and innovation.
- Encourages Radical Innovation: By challenging conventional thinking and encouraging the resolution of contradictions, TRIZ promotes radical, rather than incremental, innovation. This often leads to groundbreaking solutions that can transform industries.
In essence, TRIZ provides a robust framework that guides innovators through the complex process of problem-solving and idea generation, making it a cornerstone of technological innovation.
Where has TRIZ been used?
TRIZ has been successfully implemented in various industries, leading to significant improvements and impressive returns on investment (ROI). Here are a few notable examples:
- Samsung Electronics: Samsung used TRIZ to resolve the contradiction between improving battery life and reducing device weight in their Galaxy phones. By applying TRIZ principles, they developed innovative approaches that enhanced battery performance while making the devices lighter [1]. This led to increased customer satisfaction and market share.
- Boeing: Boeing applied TRIZ to improve the design and manufacturing processes of their aircraft. By using TRIZ principles, they were able to identify and eliminate contradictions in their design, resulting in more efficient and cost-effective production processes. This led to significant savings and improved aircraft performance.
- Hewlett-Packard (HP): HP used TRIZ to address various IT-related problems, including data security, CPU cycle costs, private clouds, and leased asset management. By applying TRIZ principles, HP was able to uncover novel solutions that improved system performance and reduced costs [2]. This resulted in substantial ROI and enhanced competitiveness in the IT industry.
- Automotive Industry: TRIZ has been applied in the automotive industry to achieve cost reduction and innovation. By integrating TRIZ into their Design to Cost (DTC) framework, companies were able to solve contradictions and achieve significant cost improvements without compromising on quality [3]. This led to increased profitability and market competitiveness.
These examples demonstrate how TRIZ can be effectively used to drive innovation and achieve substantial ROI across various industries.
Other users of TRIZ
TRIZ has been successfully applied by several companies to drive innovation and solve complex problems. Here are a few more examples:
- Siemens: Siemens has applied TRIZ to various projects, including the redesign of a streetcar driver seat armrest. This application helped simplify assembly and reduce costs [3].
- HP (Hewlett-Packard): HP has utilized TRIZ to address IT-related problems, such as data security and CPU cycle costs. The methodology has helped uncover novel and innovative solutions [4].
- Emerson Electric: Emerson Electric has leveraged TRIZ to enhance its engineering and manufacturing processes, leading to more efficient and effective solutions.
- Motorola: Motorola has used TRIZ to improve product design and development, resulting in innovative solutions and better product performance.
- NASA: NASA has applied TRIZ to solve engineering challenges in space exploration, leading to advancements in technology and mission success.
- Johnson & Johnson: Johnson & Johnson has implemented TRIZ to drive innovation in healthcare products, improving patient outcomes and streamlining manufacturing processes.
These companies have demonstrated that TRIZ can be a powerful tool for fostering creativity, improving processes, and driving innovation across various industries.
Application with other Methodologies
TRIZ complements other problem-solving methodologies by providing unique perspectives and tools that enhance the overall innovation process. Here’s how TRIZ can work in harmony with other methodologies:
- Six Sigma: TRIZ can be integrated with Six Sigma to enhance problem-solving capabilities. While Six Sigma focuses on reducing defects and improving quality through statistical analysis, TRIZ offers inventive principles and tools to address complex technical contradictions. This combination leads to more innovative and robust solutions.
- Lean Manufacturing: Lean focuses on eliminating waste and improving processes. TRIZ can complement Lean by providing inventive solutions to eliminate inefficiencies and enhance value. For example, TRIZ can help resolve contradictions in process improvement, leading to more effective waste reduction strategies.
- Design Thinking: Design Thinking emphasizes empathy and user-centric solutions. TRIZ can complement this by providing systematic tools to overcome technical challenges and contradictions identified during the ideation phase. This synergy ensures that innovative solutions are both user-friendly and technically feasible.
- Agile Methodology: Agile promotes iterative development and flexibility. TRIZ can enhance Agile by offering inventive solutions to technical problems that arise during sprints. TRIZ’s systematic approach to problem-solving can help Agile teams quickly find innovative solutions and maintain project momentum.
- Root Cause Analysis (RCA): RCA identifies underlying causes of problems. TRIZ can complement RCA by providing inventive principles to address and eliminate root causes. This combination ensures that solutions are not only effective but also innovative and sustainable.
- PDCA Cycle (Plan-Do-Check-Act): TRIZ can be integrated into the PDCA cycle to enhance continuous improvement efforts. During the planning phase, TRIZ tools can be used to identify innovative solutions. In the “Do” and “Check” phases, TRIZ can help resolve any contradictions that arise, leading to more effective implementation and evaluation.
- Brainstorming and Ideation Techniques: TRIZ can be used alongside traditional brainstorming techniques to enhance creativity. While brainstorming generates a wide range of ideas, TRIZ provides systematic tools to evaluate and refine these ideas, leading to more innovative and practical solutions.
By integrating TRIZ with these methodologies, organizations can benefit from a comprehensive and multifaceted approach to problem-solving and innovation. This synergy ensures that solutions are not only efficient and effective but also inventive and transformative.
Summary
TRIZ stands as a cornerstone of innovation due to its structured, systematic approach to problem-solving. By identifying and eliminating contradictions without compromise, TRIZ fosters radical innovation and streamlines the creative process. Its comprehensive toolkit, including the 40 inventive principles, 39 parameters, the Ideal Final Result (IFR), and evolutionary patterns, provides a robust framework that can be applied across various industries, making it universally applicable.
The methodology's success is evidenced by its implementation in leading companies such as Samsung, Boeing, HP, Intel, Siemens, Emerson Electric, Motorola, NASA, and Johnson & Johnson. These organizations have leveraged TRIZ to achieve significant improvements in product design, process efficiency, and overall innovation, leading to substantial returns on investment.
Furthermore, TRIZ complements other problem-solving methodologies like Six Sigma, Lean Manufacturing, Design Thinking, Agile, Root Cause Analysis, and the PDCA Cycle. By integrating TRIZ with these methodologies, organizations can enhance their innovation capabilities, driving both incremental and breakthrough improvements.
In essence, TRIZ provides a powerful, systematic approach to solving complex problems and expediting innovation. Its universal applicability, coupled with its ability to complement other methodologies, makes it an invaluable and disruptive asset for any organization seeking to improve its profitability and market share with new, groundbreaking products and services.
Introducing TRIZ to management, R&D departments and your strategic planning groups is a worthwhile investment to secure your company’s future and increase its profitability. Providing your people with the tools that they need will pay great dividends.
About the author:
Richard Langevin is the principal owner and CEO of Technical Innovation Center, Inc. (est. 1995), www.triz.org. He is also a founder of the Altshuller Institute for TRIZ Studies, Inc. and has served as its Executive Director since 1998, www.aitriz.org.
Enhance Your Innovation Technique Using TOP-TRIZ, the Next Generation of TRIZ
By Zinovy Royzen, Dec. 31, 2024
A TRIZ Master’s guide to inventing future generations of products using TOP-TRIZ technology forecasting guides.
In my consulting practice, innovation challenges that dogged teams of very good, experienced engineers for months or even years were resolved in a matter of days or hours. It was not magic; we used TOP-TRIZ. The teams’ problem was the ineffectiveness of methods for solving creative problems, such as brainstorming, 5 Whys, fishbone diagrams and other methods based on random creativity. TOP-TRIZ provides algorithms to solve six types of problems in innovation, based on the generalization of solutions found in the most ideal and innovative inventions.
The first goal of Genrich Altshuller, the creator of TRIZ (a Russian acronym for the Theory of Inventive Problem Solving), was to eliminate randomness in idea generation and to develop a method that guides people to invent as the most experienced inventors do.
Engineers generate new ideas based on their personal background; however, no one is a subject matter expert in everything. It is difficult to come up with the best solution to a challenging problem if it is outside of the background of the person attempting to solve it. But even when the best solution to a challenging problem is within the background of the problem-solver, it is still difficult to overcome preconceived notions and psychological barriers.
Alex Osborn, the creator of brainstorming, suggested having a team of people with different backgrounds work on a problem. His rules center on how the members of a team should work together. Recommendations for participants in a brainstorming session include focusing on creativity and generating as many ideas as possible. However, there is no guidance on how to be creative in generating new ideas. Additionally, there is no acknowledgment of the different types of challenging problems that arise in innovation. Brainstorming and brainstorming-based methods are thus ineffective and inefficient, delaying new product development for a long time.
In line with Francis Bacon’s suggestion that any practical science has to be developed as a generalization of a vast amount of specific information, Altshuller began developing the first practical, scientific approach to invention with an analysis of hundreds of thousands of patent disclosures, because each patent disclosure describes a problem and a proposed solution to it.
The first stage of Altshuller’s work took about 15 years, including a four-year break—during which time he was a political prisoner in one of Stalin’s Gulags! Starting in 1946, he first focused on studying inventions aimed at solving conflicts or contradictions, which are the most challenging type of problems that engineers face in innovation every day. Usually, an attempt to improve a feature or a function affects another feature or function. In most cases, engineers resort to trade-offs, sacrificing one or both conflicting functions. When optimization or compromise isn’t possible, these problems often remain unresolved for a very long time.
After analyzing about 200,000 patent disclosures, Altshuller discovered that roughly 20% of the inventions were breakthrough solutions to conflicts, achieving necessary improvements without any deterioration of the conflicting functions—without trade-offs or compromises. He noticed that all these solutions resulted from specific changes inventors made to their products or systems.
40 Inventive Principles to Help Solve Technical Contradictions
Generalizing the changes in products suggested by about 40,000 breakthrough inventions led him to identify 173 typical modifications. He grouped these modifications into a set known as the 40 Inventive Principles, or, in a literal translation, the 40 Principles to Solve Technical Contradictions. In addition, Altshuller developed a two-dimensional Contradiction Matrix, which includes 39 of the most common parameters or features that require improvement or are likely to deteriorate.
To match a conflict from a real-life problem with one in the Matrix, a user selects the parameter that needs improvement and the parameter that would be deteriorated by the improvement. At this intersection, the Matrix offers a cell with up to four Inventive Principles most often used to solve similar contradictions. Although the Matrix provides solutions for about 1,200 of the most common contradictions in hardware engineering, it is often challenging to match a real-life contradiction with one in the Matrix, as the 39 parameters may not be sufficient to cover all possible scenarios.
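The lookup procedure just described maps naturally onto a table keyed by (improving, worsening) parameter pairs. The sketch below shows only the mechanics; the two sample cells use placeholder principle numbers, not values transcribed from Altshuller’s published Matrix.

```python
# Toy Contradiction Matrix: each cell maps an (improving, worsening)
# parameter pair to up to four Inventive Principles, identified by number.
# The sample entries are illustrative placeholders, not published values.
MATRIX = {
    ("weight of moving object", "strength"): [1, 8, 15, 40],
    ("speed", "use of energy by moving object"): [19, 35, 38, 2],
}

def suggest_principles(improving: str, worsening: str) -> list:
    """Return the Inventive Principles suggested for this conflict,
    or an empty list if the pair has no cell in the toy Matrix."""
    return MATRIX.get((improving, worsening), [])

print(suggest_principles("weight of moving object", "strength"))  # [1, 8, 15, 40]
```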
The 40 Principles and the Matrix have been described in many publications. Due to the simplicity of the 1971 method, it has become popular among some consultants, leading to the misconception that the Inventive Principles are synonymous with TRIZ or, at the very least, the foundation of modern TRIZ. In reality, the 40 Principles were just the starting point in the development of TRIZ.
In recent years, there have been attempts to increase the number of parameters in the Matrix and add more Principles. However, these efforts failed to address the underlying reasons why Altshuller completely abandoned the Inventive Principles and the Matrix, replacing them with Classical TRIZ, which he had been developing until 1985.
By the early 1970s, Altshuller realized that his goal of eliminating randomness in solving contradictions had not been achieved using the Inventive Principles and the Matrix. He also understood the reasons behind this limitation.
First, Altshuller realized that most problems could not be solved in a single step from problem to final solution. As he wrote in his book, And Suddenly the Inventor Appeared, most problems require a combination of two or more Principles. He explained, “Ten thousand two-method combinations can be produced out of one hundred methods! You can imagine the number of solutions we can get if we use a combination of three, four, or five methods. So, let us stop solving problems by sorting out different solutions or using the trial-and-error method.”
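Altshuller’s arithmetic is straightforward to verify: 100 methods yield 100 × 100 = 10,000 ordered two-method combinations, and the space grows explosively with longer combinations, which is exactly his argument against trial-and-error search. A quick check:

```python
import math

METHODS = 100
print(METHODS ** 2)              # 10,000 ordered pairs (repeats allowed)
print(math.comb(METHODS, 2))     # 4,950 unordered pairs
for k in (3, 4, 5):
    # unordered k-method sets: 161,700 / 3,921,225 / 75,287,520
    print(k, math.comb(METHODS, k))
```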
Second, the Matrix helps identify a conflict between two functions—when one function is improved, the conflicting function is affected. However, Altshuller discovered that a conflict between two functions is merely a surface-level understanding of the problem. In any conflict, there is a single parameter controlling both of the conflicting functions. This parameter needs to have one value for the best performance of one function and a different value for the best performance of the conflicting function.
The need to establish two different values for the same parameter is what Altshuller called a physical contradiction, which is the root cause of the conflict. He questioned why we should try to solve a conflict without understanding its root cause. To address this, Altshuller began developing an algorithm to resolve conflicts, which would lead a user from understanding the conflict between two functions to recognizing its underlying physical contradiction. There are just a few rules to separate physical contradictions.
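To make the idea concrete, here is a minimal sketch of a physical contradiction together with the separation strategies most commonly cited in classical TRIZ texts (in time, in space, upon condition, and between system levels); the author’s TOP-TRIZ later expands the set to six. The worked example is the textbook wing-area case, not one drawn from this article.

```python
from dataclasses import dataclass

@dataclass
class PhysicalContradiction:
    """One parameter must hold two different values, one per function."""
    parameter: str
    value_for_function_a: str
    value_for_function_b: str

# The four separation strategies most often cited in classical TRIZ,
# phrased as questions to ask of the contradiction.
SEPARATION_RULES = [
    "In time: can the parameter take each value at different moments?",
    "In space: can each value hold in a different region of the system?",
    "Upon condition: can the value depend on an operating condition?",
    "Between system levels: can parts and whole take different values?",
]

pc = PhysicalContradiction(
    parameter="wing area",
    value_for_function_a="large, for lift at takeoff",
    value_for_function_b="small, for low drag in cruise",
)
for rule in SEPARATION_RULES:
    print(rule)
# Separation in time resolves this case: flaps and variable-geometry
# wings change the effective area between takeoff and cruise.
```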
Without knowledge of TRIZ, engineers are trained to optimize the functions of a conflict or adjust the values of the parameter controlling those functions. Trade-offs, however, are not the best possible solutions. The ideal or breakthrough solution should allow for the improvement of one function without any deterioration of the other. Yet, in many cases, a trade-off is not possible, leaving problems unresolved for months, years, or even decades.
Third, how can we match the vast number of parameters possible in real-world problems with just the 39 parameters of the Matrix? Attempts to increase the number of parameters to 50 or 100 still fall short of capturing all potential parameters. Altshuller solved this problem by inventing a way to describe conflicts using symbols. With this approach, any conflict can be described without relying on specific parameters at all.
Fourth, each Matrix cell offers only up to four principles, but not all the principles that could be used to solve a respective contradiction. Inventive Principles were the first and very primitive method for solving contradictions without addressing their root causes. Altshuller even omitted mention of the Inventive Principles and the Matrix in his book To Catch an Idea, his last book on problem-solving.
Published in 1985, it described his final version of TRIZ, now known as Classical TRIZ. The first description of Classical TRIZ in English was presented in my paper “Application of TRIZ in Value Management and Quality Improvement” at the Society of American Value Engineers International Conference in 1993 and published in the 1993 SAVE Proceedings, pages 94-101.
Five Types of Innovation Problems
Classical TRIZ, as developed by 1985, addresses five types of problems in innovation, with conflicts being only one of these types. Altshuller regarded ARIZ (the Russian acronym for Algorithm for Inventive Problem Solving) as his next-generation conflict-solving method. He continuously worked on developing and refining ARIZ until 1985, when his health deteriorated, forcing him to halt further work on Classical TRIZ. Up until then, he had been publishing new versions of ARIZ almost every year.
ARIZ-85 is an algorithm that guides the user step by step from a conflict between two functions to its deepest problem domain, called the physical contradiction, where a parameter controlling both conflicting functions has to have two values. This method proved so powerful that just a few rules of Physical Contradiction Separation replaced the entire set of Inventive Principles.
Until 1971, Altshuller considered a problem “inventive” only if it involved a contradiction. Later, however, he identified four additional types of inventive problems beyond conflicts. These are: 1) the need to eliminate detrimental or harmful functions; 2) the need to introduce new functions or improve existing ones; 3) the need to introduce or enhance detection or measurement capabilities; and 4) the need to invent next-generation products, thus addressing a wider array of innovation challenges.
Altshuller introduced Substance-Field Models (or Su-Field models) to represent conflicts and other types of inventive problems using symbolic notation. This approach was as revolutionary in innovation as the use of symbols in mathematics, which led to the development of algebra, or the introduction of symbolic notation in chemistry. By abstracting complex technical problems into symbolic models, Su-Field models allowed systematic analysis and problem-solving, providing a powerful tool within TRIZ for addressing a broad spectrum of engineering and design challenges.
Developing Standard Solutions Based on Patent Disclosures
By employing symbols to describe problems and their solutions, Altshuller developed a set of 76 Standard Solutions. These were based on his analysis and generalization of ideal solutions found in patent disclosures of groundbreaking inventions worldwide. The Standard Solutions provided a structured framework within TRIZ, enabling inventors and engineers to systematically approach and solve complex problems by leveraging proven patterns from highly innovative solutions.
Altshuller discovered that all products and technologies tend to follow similar developmental steps over time. Based on this insight, he proposed using these steps as trends, known in literal translation as the Laws of Engineering System Evolution (LESE), to guide the invention of next-generation products. LESE outlines predictable patterns in technological evolution, offering a roadmap for anticipating and shaping future innovations in a systematic way.
With Classical TRIZ, Altshuller moved closer to his goal of eliminating randomness in idea generation. However, as with any pioneering scientific framework, there was ample room for further development. This became clear to me from the start of my extensive professional TRIZ training and consulting. From January 1988 until I left for the U.S., around 3,000 engineers attended our month-long training program in Kishinev, which was approved by Altshuller.
Of the 192 hours of training, only two hours were dedicated to the Inventive Principles, mainly to cover the history of TRIZ. Working closely with participants on real-life problems and addressing their questions made me realize the potential for further developing Classical TRIZ. This would make TRIZ more powerful, accessible and easier to apply.
Classical TRIZ lacked a structured approach for problem formulation. While users learned to model problems, they often struggled to identify which specific problem to model. The various TRIZ methods were not well integrated, leading to confusion as multiple techniques were available for addressing similar classes of problems, without clear guidance on which method to apply. Additionally, the final version of ARIZ—with its nine parts and 40 rules—was too complex for easy learning and application.
Beginners frequently made errors in Step 1, rendering the remainder of the process ineffective. Efforts to create a flowchart just for using the 76 Standard Solutions resulted in extremely large, complex diagrams spanning several pages in some TRIZ books, which further underscored the need for simplification and integration.
Refining a TRIZ Method for American Engineers
TOP-TRIZ is the next generation of TRIZ, developed through my three-and-a-half decades of practical experience and continuous refinement of TRIZ methods. In early 1992, I founded TRIZ Consulting, Inc., the first company in the U.S. dedicated to applying TRIZ, introducing the methodology to industry leaders like Boeing, Hewlett-Packard, and Samsung. Facilitating teams on complex challenges and training thousands of engineers (including more than 2,000 at Boeing alone) helped me identify essential improvements to TRIZ methods—refinements that became integral to TOP-TRIZ.
As a result of years of continuous improvement of Classical TRIZ, it gradually evolved into TOP-TRIZ, the most powerful yet user-friendly iteration for solving complex problems in product innovation and developing future generations of products. This transformation was achieved through the development of advanced problem formulation techniques, including my Tool-Object-Product (TOP) Function Modeling. TOP-TRIZ uses a step-by-step process to uncover a comprehensive range of problems worth solving.
Moreover, TRIZ methods were advanced and integrated into a single, cohesive system, complete with algorithms for creating ideal breakthrough solutions across the six classes of innovation problems. By organizing techniques that address the same class of problems together, TOP-TRIZ not only helps in solving problems but also guides users in formulating, classifying and solving subsequent challenges while maximizing resource use and minimizing costs.
TOP-TRIZ has proven invaluable for solving difficult problems, improving quality, reliability, productivity and reducing costs. It has enabled customers to develop next-generation products, secure new orders, increase market share, save hundreds of millions of dollars, enhance product reliability, develop new products and protect innovations through new patents.
Figure: The TOP-TRIZ Flow Chart, which the author describes as the simplest and most user-friendly TRIZ flow chart. (Image: Zinovy Royzen, TRIZ Consulting)
Useful Versus Harmful Function Analysis
TOP-TRIZ Problem Formulation is a universal set of guiding steps for analyzing a challenge and formulating an exhaustive set of problems, where every problem is a single function, or two functions if they are in conflict with each other. An essential part of problem formulation is my Tool-Object-Product (TOP) Function Analysis, the next generation of Function Analysis.
According to TOP Function Analysis, a complete function model consists of four elements:
- the Tool (the action provider);
- an Action applied to the Object;
- the Object (the recipient of the action); and
- the Product (result of the action).
A function is useful if its result is needed, and harmful if its result is unwanted. A useful function is insufficient if the result is less than needed. A useful function can be needed but absent if no action is taken. A harmful function is unknown if its action is unknown. There are seven steps for the complete analysis of a useful function, and six steps for the complete analysis of a harmful function, including identification of its root cause and consequences. Even a complex challenge can be described by the functions involved.
This approach provides a structured, comprehensive method for problem formulation and analysis, ensuring no aspect of the challenge is overlooked.
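The four-element model transcribes directly into a small data structure. The field names follow the article’s definitions; the status values condense the useful/insufficient/absent/harmful distinctions from the preceding paragraph, so treat this as a sketch rather than official TOP-TRIZ notation.

```python
from dataclasses import dataclass
from enum import Enum

class FunctionStatus(Enum):
    USEFUL = "useful"              # result is needed and sufficient
    INSUFFICIENT = "insufficient"  # result is needed but less than required
    ABSENT = "absent"              # result is needed but no action is taken
    HARMFUL = "harmful"            # result is unwanted

@dataclass
class TOPFunction:
    """Complete TOP function model: Tool -> Action -> Object => Product."""
    tool: str      # the action provider
    action: str    # the action applied to the object
    obj: str       # the recipient of the action
    product: str   # the result of the action
    status: FunctionStatus

f = TOPFunction(tool="burner", action="heats", obj="water",
                product="steam", status=FunctionStatus.USEFUL)
print(f"{f.tool} {f.action} {f.obj} -> {f.product} ({f.status.value})")
```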
Introducing the product of a function into the function model completes the analysis and creates a clear, understandable link between functions. For example, the product of one function can serve as the tool or object in another function. This interlinking of functions is crucial for understanding the dependencies between them, particularly in the performance of a system. It is also key in analyzing a chain of harmful functions, helping to uncover the root causes of a failure.
By tracing this chain or tree of harmful functions—from the failure itself to its root cause and eventual consequences—users can more effectively diagnose and address issues within a system. This approach ensures a comprehensive view of how functions interact, leading to more precise and effective solutions.
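Because the product of one function can serve as the tool or object of the next, a failure analysis becomes a walk along a linked chain. A minimal sketch of that traversal, with an invented pump-failure example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HarmfulLink:
    """One harmful function in a failure chain, linked to its cause."""
    description: str
    cause: Optional["HarmfulLink"] = None

def root_cause(failure: HarmfulLink) -> HarmfulLink:
    """Walk back from the observed failure to the deepest known cause."""
    node = failure
    while node.cause is not None:
        node = node.cause
    return node

loose_mount = HarmfulLink("loose mount vibrates the housing")
bearing_wear = HarmfulLink("vibration wears the bearing", cause=loose_mount)
seizure = HarmfulLink("bearing seizure stops the pump", cause=bearing_wear)
print(root_cause(seizure).description)  # -> loose mount vibrates the housing
```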
An exhaustive set of problems includes:
- Current problems.
- Disadvantages of the known solutions to the current problems.
- Problems revealed by function analysis.
- Problems formulated by analysis of the history of the current problems.
- Problems formulated by challenging constraints.
- Problems formulated by analysis of the alternative system.
- Problems formulated by applying Ideal Ways, an algorithm guiding four different ways to eliminate any component, or even one of its features, associated with any disadvantage, such as cost, complexity, difficulty of manufacture, low reliability, poor usability or unwanted side effects.
TOP-TRIZ Problem Solving
TOP-TRIZ classifies formulated problems into six distinct classes: an unknown harmful function, a need to introduce or improve detection or measurement, a conflict, a harmful function, an absent or insufficient function, and a need to invent a product’s next generation. For each of these classes, TOP-TRIZ provides algorithms to develop a comprehensive set of ideal solutions, ensuring that all possible avenues for innovation are explored and addressed systematically.
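The six classes suggest a simple classify-then-dispatch structure. The class names below come straight from the article; the one-line method summaries paraphrase the sections that follow, so the table is an illustrative framing, not the TOP-TRIZ Flow Chart itself.

```python
from enum import Enum, auto

class ProblemClass(Enum):
    UNKNOWN_HARMFUL_FUNCTION = auto()
    DETECTION_OR_MEASUREMENT = auto()
    CONFLICT = auto()
    HARMFUL_FUNCTION = auto()
    ABSENT_OR_INSUFFICIENT_FUNCTION = auto()
    NEXT_GENERATION = auto()

# Each class routes to the TOP-TRIZ method named later in the article.
METHOD_FOR = {
    ProblemClass.UNKNOWN_HARMFUL_FUNCTION: "recreate the failure to reveal its mechanism",
    ProblemClass.DETECTION_OR_MEASUREMENT: "eliminate the need, else build detection",
    ProblemClass.CONFLICT: "Conflict Solving Algorithm (six steps)",
    ProblemClass.HARMFUL_FUNCTION: "Harmful Action Elimination",
    ProblemClass.ABSENT_OR_INSUFFICIENT_FUNCTION: "Build a Sufficient Function",
    ProblemClass.NEXT_GENERATION: "technology forecasting trends",
}

def route(problem: ProblemClass) -> str:
    """Return the solving method for a classified problem."""
    return METHOD_FOR[problem]

print(route(ProblemClass.CONFLICT))  # -> Conflict Solving Algorithm (six steps)
```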
A problem is classified as an Absent or Insufficient Action if there is a need to introduce a new function or to find a better way to perform an existing one. Solving this class of problems is guided by a method called Build a Sufficient Function. The key to this method is a guide for introducing a missing action and identifying one or more possible sources of the action among available resources. A list of the most used fields (the physical nature of actions) helps overcome preconceived notions and background barriers (psychological inertia) when selecting the possible nature of the action.
A problem is a Conflict if an attempt to eliminate a harmful function deteriorates or even disables a useful function. The TOP model of a conflict consists of these two functions. The TOP-TRIZ Conflict Solving Algorithm, the next generation of Altshuller’s ARIZ, is a step-by-step guide to developing breakthrough solutions that eliminate the harmful function completely without any deterioration of the useful function, sometimes even improving it as well. It consists of six steps.
The first four steps, each just one or two simple sentences, lead to identifying three physical contradictions behind the conflict. The fifth step applies all six ways to separate physical contradictions. Most conflicts can be solved using just Step 1, resulting in Physical Contradiction 1, and Step 5 for its separation. Step 6 guides the TOP-TRIZ user to state subsequent problems, if any.
A problem is classified as a Harmful Action if it results in a harmful or unneeded product. Harmful Action Elimination methods guide a TOP-TRIZ user to eliminate a harmful function and, if possible, to turn it into a useful function.
A problem is classified as an Unknown Harmful Function if its action is unknown. The method for revealing the root causes of a failure is based on inventing ways to recreate the failure. Here the TOP-TRIZ user turns the problem on its head, pretending that the harmful product of a failure is a desired product. This allows the user to apply all of the power of TOP-TRIZ to invent potential mechanisms of the failure.
A problem is classified as a Detection or Measurement problem if there is a need to introduce detection or measurement, or to improve existing capabilities. Since in most cases detection or measurement is needed to control an important process, the best approach is to eliminate the need for it altogether. If that is not possible, the method guides the user to build the most effective detection or measurement system.
A problem is classified as Technology Forecasting if there is a need to invent the next product generation, formulate new problems for further improvement of an existing system, maximize utilization of new solutions, develop a road map of innovation and marketing strategy, or develop concepts for a patent umbrella.
In contrast with conventional product road mapping, which extrapolates trends in selected parameters without understanding how the desired changes could be achieved, TOP-TRIZ technology forecasting guides the invention of future generations of products.
TOP-TRIZ technology forecasting is based on the following trends in the evolution of many products, from their birth to their decline. Statistically, there is a very high probability that your product will follow the same steps in its evolution.
- Increase the degree of Ideality of your system
- Determine the stage of the evolution of your system
- Replace a limited system
- Identify and solve contradictions behind the limits of your system
- Develop the structure of your system
- Simplify your system
- Increase the degree of dynamism
- Change states of stability
- Match properties
- Enhance useful actions
- Alter the zone and/or the duration of useful actions
- Remove a human as a source of energy and/or control
Solving Subsequent Problems
It is rare for a challenging problem to be solved in just one step. Usually, a new concept introduces a new problem. Most often, a solution to the initial problem leads to the deterioration of another aspect, creating a conflict, or reveals the need to modify an available resource (an absent action), or both. In such cases, a subsequent problem is not a reason to reject an idea. TOP-TRIZ guides users in identifying, classifying and solving these subsequent problems.
There is another type of subsequent problem to consider. No matter how good your new concept is, there are always next steps according to technology forecasting. The question is, why not reveal these steps right away? Many engineers view technology forecasting as a tool for road mapping innovation, and as a result, they don’t apply it while solving a single problem. This oversight leads to missed opportunities to enhance their best concepts further. TOP-TRIZ encourages the proactive use of technology forecasting to reveal these next steps, helping to refine and improve the solution continuously.
TOP-TRIZ Solution Process
Figure: The TOP-TRIZ Solution Tree, a guide for review and course correction of any step or misstep along the problem-solving algorithm. (Image: Zinovy Royzen, TRIZ Consulting)
TOP-TRIZ allows users to document every step of the solution process and guides them in constructing a Solution Tree for each problem. This approach enables the review and correction of any step, if needed. The documented steps can then be used as reference samples for solving future challenges, ensuring consistency and efficiency in the problem-solving process.
TOP-TRIZ guides you in your project by helping you formulate an exhaustive set of problems worth solving and an exhaustive set of the best solutions to them. This process maximizes the use of available resources, enabling the development of better products at a lower cost.
Enhancing TOP-TRIZ Problem Solving with AI
Sometimes, the best concepts developed in the TOP-TRIZ problem-solving process create a need for knowledge outside the participants’ domains of expertise, since no one is an expert in everything. Here, AI is used as a universal subject matter expert, producing impressive outcomes and significantly cutting down the problem-solving duration. However, one current limitation of AI tools is that they are typically limited to problems that require only a single step from problem statement to best solution.
Yet most problems can’t be solved in one step. Here, the TOP-TRIZ process provides guidance to frame questions that pertain to individual steps.
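One way to picture that step-wise framing: instead of asking an assistant for “the solution” in a single shot, pose one TOP-TRIZ step per query and carry each answer forward. The `ask_llm` function below is a hypothetical stub for whatever chat-model client you use, and the step wording is mine, not the official TOP-TRIZ phrasing.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat-model API; replace with
    a real client. Here it just echoes a placeholder answer."""
    return f"[model answer to: {prompt.splitlines()[-1]}]"

# One question per TOP-TRIZ step, rather than one monolithic prompt.
STEPS = [
    "State the conflict: which useful function does removing the harmful one degrade?",
    "Name the single parameter controlling both functions (the physical contradiction).",
    "Propose separations of that contradiction in time, space or condition.",
    "List the subsequent problems each proposed separation would introduce.",
]

context = "Problem: housing vibration wears the pump bearing."
for step in STEPS:
    answer = ask_llm(f"{context}\n\n{step}")
    context += f"\n{step}\n{answer}"  # feed each answer into the next step
print(context)
```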
Also, our efforts in training AI with TOP-TRIZ have yielded astonishing results. Some issues that dogged our clients for years and took hours to resolve using TOP-TRIZ were solved in a matter of minutes when approached with AI trained on TOP-TRIZ.
Innovator’s Engine, TOP-TRIZ Problem Formulator and Solver Software
The software guides users through all steps of TOP-TRIZ Problem Formulation, solving conflicts, building sufficient functions, eliminating harmful actions and revealing the causes of a failure. The software states subsequent problems and offers corresponding methods to solve them. Because it uses the same steps and terminology as the TOP-TRIZ training materials, it also helps users learn TOP-TRIZ. The software is very effective for facilitating teams, even those unfamiliar with TOP-TRIZ.
TOP-TRIZ methodology guides you through your project by helping you formulate an exhaustive set of problems related to your system, the current issue and your objectives. It then leads you to develop a comprehensive set of the best solutions. These solutions serve as the foundation for selecting the most ideal solutions for immediate, near-term and long-term implementation. By having an exhaustive set of commercially applicable solutions, you create a strong basis for reliable patent protection, securing your business's innovation for years to come.
TOP-TRIZ methodology offers significant advantages for systematic innovation, helping to develop better products and processes at a lower cost and in less time, all while remaining user-friendly. It enables faster generation of elegant and valuable solutions to even the most challenging design and manufacturing problems, accelerating projects and saving substantial man-hours and money.
References
- Altshuller, Genrich. And Suddenly the Inventor Appeared: TRIZ, the Theory of Inventive Problem Solving. Translated by Lev Shulyak. Worcester, MA: Technical Innovation Center, 1996.
file:///C:/Users/Owner/AppData/Local/Temp/msohtmlclip1/01/clip_image001.png" width="243" height="47" />Enhance Your Innovation
Technique Using TOP-TRIZ, the Next Generation of TRIZ
Zinovy Royzen Dec. 31, 2024
A TRIZ Master’s guide to inventing future generations of products using TOP-TRIZ technology forecasting guides.
In my consulting practice, innovation challenges that dogged teams having very good and experienced engineers for months or even years were resolved in a matter of days or hours. It was not magic; we used TOP-TRIZ. The teams’ problem was the ineffectiveness of methods for solving creative problems like brainstorming, 5-ways, fishbone diagram and other methods based on random creativity. TOP-TRIZ provides algorithms to solve six types of problems in innovation based on generalization of solutions in most ideal and innovative inventions.
The first goal of Genrich Altshuller, the creator of TRIZ (a Russian acronym that means Theory of Inventive Problem Solving) was to eliminate randomness in idea generation and develop a method guiding people invent as most experienced inventors.
Engineers generate new ideas based on their personal background; however, no one is a subject matter expert in everything. It is difficult to come up with the best solution to a challenging problem if it is outside of the background of the person attempting to solve it. But even when the best solution to a challenging problem is within the background of the problem-solver, it is still difficult to overcome preconceived notions and psychological barriers.
Alex Osborn, the creator of brainstorming, suggested making a team of people with different background work on a problem. His rules center around how the members of a team should work together. Recommendations for participants in a brainstorming session include focusing on creativity and generating as many ideas as possible. However, there is no guidance on how to be creative in generating new ideas. Additionally, there is no acknowledgment of the different types of challenging problems that arise in innovation. Brainstorming and brainstorming-based methods are thus ineffective and inefficient, delaying new product development for a long time.
In line with Francis Bacon’s suggestion that any practical science has to be developed as generalization of vast amount of specific information, Altshuller started development of the first practical scientific approach to invent with analysis of hundreds of thousands of patent disclosures, because each patent disclosure describes a problem and a proposed solution to it.
The first stage of Altshuller’s work took about 15 years, including a four-year break—during which time he was a political prisoner in one of Stalin’s Gulags! Starting in 1946, he first focused on studying inventions aimed at solving conflicts or contradictions, which are the most challenging type of problems that engineers face in innovation every day. Usually, an attempt to improve a feature or a function affects another feature or function. In most cases, engineers resort to trade-offs, sacrificing one or both conflicting functions. When optimization or compromise isn’t possible, these problems often remain unresolved for a very long time.
After analyzing about 200,000 patent disclosures, Altshuller discovered that roughly 20% of the inventions were breakthrough solutions to conflicts, achieving necessary improvements without any deterioration of the conflicting functions—without trade-offs or compromises. He noticed that all these solutions resulted from specific changes inventors made to their products or systems.
40 Inventive Principles to Help Solve Technical Contradictions
Generalizing the changes in products suggested by about 40,000 breakthrough inventions led him to identify 173 typical modifications. He grouped these modifications into a set known as the 40 Inventive Principles, or, in a literal translation, the 40 Principles to Solve Technical Contradictions. In addition, Altshuller developed a two-dimensional Contradiction Matrix, which includes 39 of the most common parameters or features that require improvement or are likely to deteriorate.
To match a conflict from a real-life problem with one in the Matrix, a user selects the parameter that needs improvement and the parameter that would be deteriorated by the improvement. In this intersection, the Matrix will offer a cell with up to four Inventive Principles that are most used to solve similar contradictions. Although the Matrix provides solutions for about 1,200 of the most common contradictions in hardware engineering, it is often challenging to match a real-life contradiction with one in the Matrix, as the 39 parameters may not be sufficient to cover all possible scenarios.
The 40 Principles and the Matrix have been described in many publications. Due to the simplicity of the 1971 method, it has become popular among some consultants, leading to the misconception that the Inventive Principles are synonymous with TRIZ or, at the very least, the foundation of modern TRIZ. In reality, the 40 Principles were just the starting point in the development of TRIZ.
READ MORE: Inventive Principles to Solving
In recent years, there have been attempts to increase the number of parameters in the Matrix and add more Principles. However, these efforts failed to address the underlying reasons why Altshuller completely abandoned the Inventive Principles and the Matrix, replacing them with Classical TRIZ, which he had been developing until 1985.
By the early 1970s, Altshuller realized that his goal of eliminating randomness in solving contradictions had not been achieved using the Inventive Principles and the Matrix. He also understood the reasons behind this limitation.
First, Altshuller realized that most problems could not be solved in a single step from problem to final solution. As he wrote in his book, And Suddenly the Inventor Appeared, most problems require a combination of two or more Principles. He explained, “Ten thousand two-method combinations can be produced out of one hundred methods! You can imagine the number of solutions we can get if we use a combination of three, four, or five methods. So, let us stop solving problems by sorting out different solutions or using the trial-and-error method.”
Second, the Matrix helps identify a conflict between two functions—when one function is improved, the conflicting function is affected. However, Altshuller discovered that a conflict between two functions is merely a surface-level understanding of the problem. In any conflict, there is a single parameter controlling both of the conflicting functions. This parameter needs to have one value for the best performance of one function and a different value for the best performance of the conflicting function.
The need to establish two different values for the same parameter is what Altshuller called a physical contradiction, which is the root cause of the conflict. He questioned why we should try to solve a conflict without understanding its root cause. To address this, Altshuller began developing an algorithm to resolve conflicts, which would lead a user from understanding the conflict between two functions to recognizing its underlying physical contradiction. There are just a few rules to separate physical contradictions.
Without knowledge of TRIZ, engineers are trained to optimize the functions of a conflict or adjust the values of the parameter controlling those functions. Trade-offs, however, are not the best possible solutions. The ideal or breakthrough solution should allow for the improvement of one function without any deterioration of the other. Yet, in many cases, a trade-off is not possible, leaving problems unresolved for months, years, or even decades.
Third, how can we match the vast number of parameters possible in real-world problems with just the 39 parameters of the Matrix? Attempts to increase the number of parameters to 50 or 100 still fall short of capturing all potential parameters. Altshuller solved this problem by inventing a way to describe conflicts using symbols. With this approach, any conflict can be described without relying on specific parameters at all.
Fourth, each Matrix cell offers only up to four principles, but not all the principles that could be used to solve a respective contradiction. Inventive Principles were the first and very primitive method for solving contradictions without addressing their root causes. Altshuller even omitted mention of the Inventive Principles and the Matrix in his book To Catch an Idea, his last book on problem-solving.
Published in 1985, it described his final version of TRIZ, now known as Classical TRIZ. The first description of Classical TRIZ in English was presented in my paper “Application of TRIZ in Value Management and Quality Improvement” at the Society of American Value Engineers International Conference in 1993 and published in the 1993 SAVE Proceedings, pages 94-101.
Five Types of Innovation Problems
Classical TRIZ, as developed by 1985, addresses five types of problems in innovation, with conflicts being only one of these types. Altshuller regarded ARIZ (the Russian acronym for Algorithm for Inventive Problem Solving) as his next-generation conflict-solving method. He continuously worked on developing and refining ARIZ until 1985, when his health deteriorated, forcing him to halt further work on Classical TRIZ. Up until then, he had been publishing new versions of ARIZ almost every year.
ARIZ-85 is an algorithm that guides the user step by step from a conflict between two functions to the deepest level of the problem, called the physical contradiction, where a parameter controlling both conflicting functions has to have two different values. This method proved so powerful that just a few rules of Physical Contradiction Separation replaced the entire set of Inventive Principles.
Until 1971, Altshuller considered a problem “inventive” only if it involved a contradiction. Later, however, he identified four additional types of inventive problems beyond conflicts. These are: 1) the need to eliminate detrimental or harmful functions; 2) the need to introduce new functions or improve existing ones; 3) the need to introduce or enhance detection or measurement capabilities; and 4) the need to invent next-generation products, thus addressing a wider array of innovation challenges.
Altshuller introduced Substance-Field Models (or Su-Field models) to represent conflicts and other types of inventive problems using symbolic notation. This approach was as revolutionary in innovation as the use of symbols in mathematics, which led to the development of algebra, or the introduction of symbolic notation in chemistry. By abstracting complex technical problems into symbolic models, Su-Field models allowed systematic analysis and problem-solving, providing a powerful tool within TRIZ for addressing a broad spectrum of engineering and design challenges.
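To make the parallel with algebraic notation concrete, here is a minimal Python sketch of how a Su-Field triad (two substances joined by a field) might be encoded; the field list, class names, and drilling example are illustrative assumptions, not standard TRIZ notation.

from dataclasses import dataclass
from enum import Enum

class Field(Enum):
    MECHANICAL = "Me"
    THERMAL = "Th"
    CHEMICAL = "Ch"
    ELECTRIC = "El"
    MAGNETIC = "Ma"

class Interaction(Enum):
    USEFUL = "useful"
    HARMFUL = "harmful"
    INSUFFICIENT = "insufficient"

@dataclass
class SuField:
    s1: str          # substance acted upon
    s2: str          # substance providing the action
    field: Field     # physical nature of the action
    interaction: Interaction

# Example: a drill bit (S2) cuts rock (S1) through a mechanical field,
# but the cutting action is insufficient.
model = SuField(s1="rock", s2="drill bit",
                field=Field.MECHANICAL,
                interaction=Interaction.INSUFFICIENT)
print(f"S2={model.s2} --({model.field.value}, {model.interaction.value})--> S1={model.s1}")

The point of such abstraction, as in algebra, is that the same symbolic pattern can stand in for countless concrete problems.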
Developing Standard Solutions Based on Patent Disclosures
By employing symbols to describe problems and their solutions, Altshuller developed a set of 76 Standard Solutions. These were based on his analysis and generalization of ideal solutions found in patent disclosures of groundbreaking inventions worldwide. The Standard Solutions provided a structured framework within TRIZ, enabling inventors and engineers to systematically approach and solve complex problems by leveraging proven patterns from highly innovative solutions.
Altshuller discovered that all products and technologies tend to follow similar developmental steps over time. Based on this insight, he proposed using these steps as trends, known in literal translation as the Laws of Engineering System Evolution (LESE), to guide the invention of next-generation products. LESE outlines predictable patterns in technological evolution, offering a roadmap for anticipating and shaping future innovations in a systematic way.
With Classical TRIZ, Altshuller moved closer to his goal of eliminating randomness in idea generation. However, as with any pioneering scientific framework, there was ample room for further development. This became clear to me from the start of my extensive professional TRIZ training and consulting. From January 1988 until I left for the U.S., around 3,000 engineers attended our month-long training program in Kishinev, which used a curriculum approved by Altshuller.
Of the 192 hours of training, only two hours were dedicated to the Inventive Principles, mainly to cover the history of TRIZ. Working closely with participants on real-life problems and addressing their questions made me realize the potential for further developing Classical TRIZ. This would make TRIZ more powerful, accessible and easier to apply.
Classical TRIZ lacked a structured approach for problem formulation. While users learned to model problems, they often struggled to identify which specific problem to model. The various TRIZ methods were not well integrated, leading to confusion as multiple techniques were available for addressing similar classes of problems, without clear guidance on which method to apply. Additionally, the final version of ARIZ—with its nine parts and 40 rules—was too complex for easy learning and application.
Beginners frequently made errors in Step 1, rendering the remainder of the process ineffective. Efforts to create a flowchart just for using the 76 Standard Solutions resulted in extremely large, complex diagrams spanning several pages in some TRIZ books, which further underscored the need for simplification and integration.
Refining a TRIZ Method for American Engineers
TOP-TRIZ is the next generation of TRIZ, developed through my three-and-a-half decades of practical experience and continuous refinement of TRIZ methods. In early 1992, I founded TRIZ Consulting, Inc., the first company in the U.S. dedicated to applying TRIZ, introducing the methodology to industry leaders like Boeing, Hewlett-Packard, and Samsung. Facilitating teams on complex challenges and training thousands of engineers (including more than 2,000 at Boeing alone) helped me identify essential improvements to TRIZ methods—refinements that became integral to TOP-TRIZ.
Years of continuous improvement gradually evolved Classical TRIZ into TOP-TRIZ, the most powerful yet user-friendly iteration for solving complex problems in product innovation and developing future generations of products. This transformation was achieved through the development of advanced problem formulation techniques, including my Tool-Object-Product (TOP) Function Modeling. TOP-TRIZ uses a step-by-step process to uncover a comprehensive range of problems worth solving.
Moreover, TRIZ methods were advanced and integrated into a single, cohesive system, complete with algorithms for creating ideal breakthrough solutions across the six classes of innovation problems. By organizing techniques that address the same class of problems together, TOP-TRIZ not only helps in solving problems but also guides users in formulating, classifying, and solving subsequent challenges while maximizing resource use and minimizing costs.
TOP-TRIZ has proven invaluable for solving difficult problems, improving quality, reliability, and productivity, and reducing costs. It has enabled customers to develop next-generation products, secure new orders, increase market share, save hundreds of millions of dollars, and protect innovations through new patents.
Zinovy Royzen, TRIZ Consulting
The TOP-TRIZ Flow Chart is the simplest and most user-friendly TRIZ flow chart, according to Royzen.
Useful Versus Harmful Function Analysis
TOP-TRIZ Problem Formulation is a universal set of guiding steps to analyze a challenge and formulate an exhaustive set of problems, in which every problem is a single function, or two functions if they conflict with each other. An essential part of problem formulation is my Tool-Object-Product (TOP) Function Analysis, the next generation of Function Analysis.
According to TOP Function Analysis, a complete function model consists of four elements:
- the Tool (the action provider);
- an Action applied to the Object;
- the Object (recipient of the action); and
- the Product (result of the action).
A function is useful if its result is needed, and harmful if its result is unwanted. A useful function can be insufficient if the result is less than needed. A useful function can be needed but absent if there is no action taken. A harmful function is unknown if the action is unknown. There are seven steps for the complete analysis of a useful function. There are six steps for the complete analysis of a harmful function including identification of its root cause and consequences. Even a complex challenge can be described by the involved functions.
This approach provides a structured, comprehensive method for problem formulation and analysis, ensuring no aspect of the challenge is overlooked.
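As a rough illustration of this four-element model, the sketch below encodes a TOP function and its status in Python; the identifiers and the soldering example are hypothetical, not taken from the TOP-TRIZ materials.

from dataclasses import dataclass
from enum import Enum

class FunctionStatus(Enum):
    USEFUL = "useful"               # result is needed
    INSUFFICIENT = "insufficient"   # result is less than needed
    ABSENT = "absent"               # needed, but no action is taken
    HARMFUL = "harmful"             # result is unwanted
    UNKNOWN_HARMFUL = "unknown"     # harmful, but the action is unknown

@dataclass
class TopFunction:
    tool: str      # the action provider
    action: str    # what the Tool does to the Object
    obj: str       # the recipient of the action
    product: str   # the result of the action
    status: FunctionStatus

    def describe(self) -> str:
        return (f"[{self.status.value}] {self.tool} --{self.action}--> "
                f"{self.obj} => {self.product}")

# Hypothetical example stated as a TOP function.
f = TopFunction(tool="soldering iron", action="heats", obj="joint",
                product="melted solder", status=FunctionStatus.USEFUL)
print(f.describe())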
READ MORE: TRIZ is Now Practiced in 50 Countries
Introducing the product of a function into the function model completes the analysis and creates a clear, understandable link between functions. For example, the product of one function can serve as the tool or object in another function. This interlinking of functions is crucial for understanding the dependencies between them, particularly in the performance of a system. It is also key in analyzing a chain of harmful functions, helping to uncover the root causes of a failure.
By tracing this chain or tree of harmful functions—from the failure itself to its root cause and eventual consequences—users can more effectively diagnose and address issues within a system. This approach ensures a comprehensive view of how functions interact, leading to more precise and effective solutions.
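Here is a minimal sketch of that chain-tracing idea, assuming each harmful function simply records the function whose product caused it; the failure chain is invented for illustration.

# Map each harmful function to its cause; None marks a candidate root cause.
causes = {
    "bearing seizes": "bearing overheats",
    "bearing overheats": "lubricant degrades",
    "lubricant degrades": "seal leaks",
    "seal leaks": None,
}

def trace_root_cause(failure: str) -> list[str]:
    """Walk the cause links from the observed failure to the root cause."""
    chain = [failure]
    while causes.get(chain[-1]) is not None:
        chain.append(causes[chain[-1]])
    return chain

print(" <- ".join(trace_root_cause("bearing seizes")))
# bearing seizes <- bearing overheats <- lubricant degrades <- seal leaks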
An exhaustive set of problems includes:
- Current problems.
- Disadvantages of the known solutions to the current problems.
- Problems revealed by function analysis.
- Problems formulated by analysis of the history of the current problems.
- Problems formulated by challenging constraints.
- Problems formulated by analysis of the alternative system.
- Problems formulated by applying Ideal Ways, an algorithm guiding four different ways to eliminate any component, or even one of its features, associated with any disadvantage, such as cost, complexity, difficulty of manufacture, low reliability, difficulty of use, or unwanted effects on something else.
TOP-TRIZ Problem Solving
TOP-TRIZ classifies formulated problems into six distinct classes: an unknown harmful function, a need to introduce or improve detection or measurement, a conflict, a harmful function, an absent or insufficient function, and a need to invent a product’s next generation. For each of these classes, TOP-TRIZ provides algorithms to develop a comprehensive set of ideal solutions, ensuring that all possible avenues for innovation are explored and addressed systematically.
A problem is classified as an Absent or Insufficient Action if there is a need to introduce a new function or find a better way to perform an existing function. Solving this class of problems is guided by a method called Build a Sufficient Function. The key to this method is a guide to introduce a missing action and identify one or more possible sources of the action among available resources. A list of the most commonly used fields (the physical nature of actions) helps overcome preconceived notions and psychological inertia in selecting the possible nature of the action.
A problem is a Conflict if an attempt to eliminate a harmful function deteriorates or even disables a useful function. The TOP model of a conflict consists of these two functions. The TOP-TRIZ Conflict Solving Algorithm is the next generation of Altshuller’s ARIZ: a step-by-step guide to developing breakthrough solutions that eliminate the harmful function completely, without any deterioration of the useful function and sometimes even with its improvement. It consists of six steps.
The first four steps, each one or two simple sentences, lead to identifying three physical contradictions behind the conflict. The fifth step applies all six ways to separate physical contradictions. Most conflicts can be solved by using just Step 1, resulting in Physical Contradiction 1, and Step 5 for its separation. Step 6 guides the TOP-TRIZ user to state subsequent problems, if any.
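The outline below is only a schematic Python sketch of this six-step flow, not the algorithm itself: the real steps are applied by a trained user, this article does not enumerate the six separation ways, and every name in the code is a placeholder.

def solve_conflict(useful_fn: str, harmful_fn: str) -> dict:
    # Steps 1-4: derive the three physical contradictions behind the conflict.
    contradictions = [f"PC{i}" for i in (1, 2, 3)]
    # Step 5: apply all six separation ways to each contradiction.
    # The article does not list the six ways, so generic labels are used.
    candidates = [f"separate {pc} by way {w}"
                  for pc in contradictions for w in range(1, 7)]
    # Step 6: state subsequent problems, if any, for classification.
    return {"conflict": (useful_fn, harmful_fn),
            "contradictions": contradictions,
            "candidates": candidates,
            "next": "state subsequent problems, if any"}

out = solve_conflict("clamp holds the part firmly",
                     "clamp deforms the part")
print(len(out["candidates"]), "candidate solution directions")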
The Conflict Solving Algorithm Flow Chart
Zinovy Royzen, TRIZ Consulting
TOP-TRIZ classifies formulated problems into six classes and provides algorithms to develop an exhaustive set of ideal solutions for each of them.
A problem is classified as a Harmful Action if it results in a harmful or unnecessary product. Harmful Action Elimination methods guide a TOP-TRIZ user to eliminate a harmful function and, if possible, turn it into a useful function.
A problem is classified as an Unknown Harmful Function if its action is unknown. The method for revealing the root causes of a failure is based on inventing ways to recreate the failure. Here the TOP-TRIZ user turns the problem on its head, pretending that the harmful product of a failure is a desired product. This allows the user to apply all of the power of TOP-TRIZ to invent potential mechanisms of the failure.
A problem is classified as Detection or Measurement if there is a need to introduce detection or measurement or to improve existing capabilities. Since in most cases detection or measurement is needed to control an important process, the best approach is to eliminate the need for detection or measurement altogether. If that is not possible, the method guides the user to build the most effective detection or measurement system.
A problem is classified as Technology Forecasting if there is a need to invent the next product generation, formulate new problems for further improvement of an existing system, maximize the utilization of new solutions, develop a road map for innovation and marketing strategy, or develop concepts for a patent umbrella.
In contrast with conventional innovation road mapping, which extrapolates trends in selected product parameters without understanding how the desired parameter changes could be achieved, TOP-TRIZ technology forecasting guides the invention of future product generations.
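For readers who like to see the structure spelled out, here is a minimal Python sketch of the six classes and the solving method the surrounding text names for each; the identifiers and the mapping format are illustrative, not part of the TOP-TRIZ materials.

from enum import Enum, auto

class ProblemClass(Enum):
    UNKNOWN_HARMFUL_FUNCTION = auto()
    DETECTION_OR_MEASUREMENT = auto()
    CONFLICT = auto()
    HARMFUL_FUNCTION = auto()
    ABSENT_OR_INSUFFICIENT_FUNCTION = auto()
    NEXT_GENERATION = auto()

# Method names paraphrase the descriptions above.
METHODS = {
    ProblemClass.CONFLICT: "Conflict Solving Algorithm",
    ProblemClass.HARMFUL_FUNCTION: "Harmful Action Elimination",
    ProblemClass.ABSENT_OR_INSUFFICIENT_FUNCTION: "Build a Sufficient Function",
    ProblemClass.UNKNOWN_HARMFUL_FUNCTION: "Invent ways to recreate the failure",
    ProblemClass.DETECTION_OR_MEASUREMENT: "Eliminate the need, else build the system",
    ProblemClass.NEXT_GENERATION: "Technology Forecasting",
}

for p in ProblemClass:
    print(f"{p.name}: {METHODS[p]}")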
READ MORE: TRIZ Plus – A Modern Tool for Enhancing Design Innovation
TOP-TRIZ technology forecasting is based on the following trends in the evolution of many products, from birth to decline. Statistically, there is a very high probability that your product will follow the same steps in its evolution.
- Increase the degree of Ideality of your system
- Determine the stage of the evolution of your system
- Replace a limited system
- Identify and solve contradictions behind the limits of your system
- Develop the structure of your system
- Simplify your system
- Increase the degree of dynamism
- Change states of stability
- Match properties
- Enhance useful actions
- Alter the zone and/or the duration of useful actions
- Remove a human as a source of energy and/or control
Solving Subsequent Problems
It is rare for a challenging problem to be solved in just one step. Usually, a new concept introduces a new problem. Most often, a solution to the initial problem leads to the deterioration of another aspect, creating a conflict, or reveals the need to modify an available resource (an absent action), or both. In such cases, a subsequent problem is not a reason to reject an idea. TOP-TRIZ guides users in identifying, classifying and solving these subsequent problems.
There is another type of subsequent problem to consider. No matter how good your new concept is, there are always next steps according to technology forecasting. The question is, why not reveal these steps right away? Many engineers view technology forecasting as a tool for road mapping innovation, and as a result, they don’t apply it while solving a single problem. This oversight leads to missed opportunities to enhance their best concepts further. TOP-TRIZ encourages the proactive use of technology forecasting to reveal these next steps, helping to refine and improve the solution continuously.
TOP-TRIZ Solution Process
Zinovy Royzen, TRIZ Consulting
The TOP-TRIZ Solution Tree is a guide for review and course correction of any step or misstep along the problem-solving algorithm.
TOP-TRIZ allows users to document every step of the solution process and guides them in constructing a Solution Tree for each problem. This approach enables the review and correction of any step, if needed. The documented steps can then be used as reference samples for solving future challenges, ensuring consistency and efficiency in the problem-solving process.
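A minimal sketch of what such a Solution Tree might look like as a data structure, with hypothetical field names; the clamp example is invented for illustration.

from dataclasses import dataclass, field

@dataclass
class SolutionNode:
    step: str                       # e.g., "formulate conflict"
    note: str = ""                  # what was decided and why
    children: list["SolutionNode"] = field(default_factory=list)

    def add(self, step: str, note: str = "") -> "SolutionNode":
        child = SolutionNode(step, note)
        self.children.append(child)
        return child

    def show(self, depth: int = 0) -> None:
        print("  " * depth + f"{self.step}: {self.note}")
        for c in self.children:
            c.show(depth + 1)

root = SolutionNode("initial problem", "clamp deforms the part")
pc = root.add("physical contradiction", "clamping force must be high and low")
pc.add("separation", "high force at one time, low at another")
root.show()

Documenting each step this way is what makes it possible to revisit a misstep without redoing the whole analysis.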
TOP-TRIZ guides you in your project by helping you formulate an exhaustive set of problems worth solving and an exhaustive set of the best solutions to them. This process maximizes the use of available resources, enabling the development of better products at a lower cost.
Enhancing TOP-TRIZ Problem Solving with AI
Sometimes the best concepts developed in the TOP-TRIZ problem-solving process require knowledge outside the participants’ domains of expertise, since no one is an expert in everything. Here, AI is used as a universal subject matter expert, producing impressive outcomes and significantly cutting down the problem-solving duration. However, one current limitation of AI tools is that they are typically limited to problems that require only a single step change to arrive at the best solution.
Yet most problems can’t be solved in one step. Here, the TOP-TRIZ process provides guidance to frame questions that pertain to individual steps.
Also, our efforts in training AI with TOP-TRIZ have yielded astonishing results. Some issues that dogged our clients for years and took hours to resolve using TOP-TRIZ were solved in a matter of minutes when approached with AI trained on TOP-TRIZ.
Innovator’s Engine, TOP-TRIZ Problem Formulator and Solver Software
The software guides users through all steps of TOP-TRIZ Problem Formulation, solving conflicts, building sufficient functions, eliminating harmful actions and revealing the causes of a failure. The software states subsequent problems and offers corresponding methods to solve them. Because it uses the same steps and terminology as the TOP-TRIZ training materials, it also helps users learn TOP-TRIZ. The software is very effective for facilitating teams, even those unfamiliar with TOP-TRIZ.
TOP-TRIZ methodology guides you through your project by helping you formulate an exhaustive set of problems related to your system, the current issue and your objectives. It then leads you to develop a comprehensive set of the best solutions. These solutions serve as the foundation for selecting the most ideal solutions for immediate, near-term and long-term implementation. By having an exhaustive set of commercially applicable solutions, you create a strong basis for reliable patent protection, securing your business's innovation for years to come.
TOP-TRIZ methodology offers significant advantages for systematic innovation, helping to develop better products and processes at a lower cost and in less time, all while remaining user-friendly. It enables faster generation of elegant and valuable solutions to even the most challenging design and manufacturing problems, accelerating projects and saving substantial man-hours and money.
References
- Altshuller, Genrich, And Suddenly the Inventor Appeared: TRIZ, the Theory of Inventive Problem Solving. Translated by Lev Shulyak. Worcester, Massachusetts, Technical Innovation Center, Inc., 1996. Page 14.
- Royzen, Zinovy, “Application of TRIZ in Value Management and Quality Improvement,” Society of American Value Engineers International Conference, 1993; published in the 1993 SAVE Proceedings, pages 94-101. Also available at https://www.trizconsulting.com/TRIZApplicationinValueManagement.pdf
- Tools of Classical TRIZ. Southfield, Mich., Ideation International, Inc., 1999.
- Royzen, Zinovy, Tool, Object, Product (TOP) Function Analysis, TRIZCON99, The First Symposium on TRIZ Methodology and Application of Altshuller Institute for TRIZ Studies, March 7-9, 1999, Novi, Mich., Pages 17-30.
- Royzen, Zinovy, Designing and Manufacturing Better Products Faster Using TRIZ, Seattle, Wash., TRIZ Consulting, Inc., 2024.
About the Author
Zinovy Royzen | President, TRIZ Consulting, Inc.
Zinovy Royzen, president of TRIZ Consulting, Inc., Seattle, is a TRIZ Master. He holds an M.S. in mechanical engineering and has led TRIZ training and provided facilitation at companies and organizations across industries, including Boeing, Bridgestone, Dexcom, Eastman Kodak, Ford Motor Company, Harley-Davidson Motor Company, Hewlett-Packard, Illinois Tool Works, Inficon, Ingersoll Rand, Kimberly-Clark, LG Electronics, Lucent Technologies, Michelin, National Semiconductor, NASA, Philips, Plug Power, Rolls-Royce, Samsung, Siemens, Western Digital and Xerox.
After Uproar, Google Revives AI-Generated Images Of People, With Limits
Aug 29, 2024
Some Google Gemini users will soon be able to create AI-generated people again.
Google is reinstating a Gemini feature that lets users create AI-generated images of people after suspending the capability earlier this year amid criticism it produced misleading and historically inaccurate depictions.
The company announced in a blog post on Wednesday that in the days to come, the ability to create images of people from within the company’s Gemini suite of AI models will return to Gemini Advanced, Business, and Enterprise customers. Like other image-generation tools such as Midjourney, Stable Diffusion, and OpenAI’s Dall-E, Google’s Imagen 3 lets users instantly turn typed or spoken text prompts into visual representations.
“We’ve worked to make technical improvements to the product, as well as improved evaluation sets, red-teaming exercises, and clear product principles,” reads the post from Dave Citron, senior director of product management for Gemini Experiences. Red teaming means simulating harmful behavior to spot and correct product vulnerabilities.
Two weeks after Google launched Gemini in February, the company paused some image-generation features when critics—most notably tech entrepreneur Elon Musk, who’s working on rival AI products—accused the tool of being “woke” for producing images of people who didn’t align with the historical ethnic and gender realities of the times they represented. One controversial photo, for example, depicted Black men and women wearing World War II-era German military uniforms, while another showed a female pope.
Such images quickly went viral, and Jack Krawczyk, a Google AI product lead who on social media often welcomed user input on the company’s offerings, reduced his public online presence after being harassed over the issue.
Sundar Pichai, CEO of Alphabet and its Google subsidiary, sent an internal memo to employees saying the image issues “have offended our users and shown bias.” He added, “to be clear, that’s completely unacceptable and we got it wrong.” Google’s stock value fell as the company rushed to respond to the controversy.
Humans Created By Imagen, The Sequel
Now, almost seven months later, the capability to turn prompts into images of people is back, at least for some users, starting in English.
“With Imagen 3, we’ve made significant progress in providing a better user experience when generating images of people,” the blog reads.
The post includes several examples of images generated by Imagen, but none of them involve humans. Instead, one shows an image produced by the cute, non-controversial prompt “tiny dragon hatching from an egg in a sunlit meadow, surrounded by curious glowing butterflies.” Another shows a photorealistic image of a mountain vista.
People, Yes. Minors Or Sexual Images, No
The tool, however, “will not support” the generation of photorealistic, identifiable people, the post indicated. Nor will it support depictions of minors or excessively gory, violent, or sexual images. A Google spokesperson clarified in an email that “will not support” means the tool won’t create such images when prompted to do so.
“We design our systems not to create offensive content in the first place and we have additional checks on output,” the spokesperson said. “While certain prompts might test these safeguards, and the results may not always be perfect, we’ve made significant progress in ensuring a more responsible and aligned user experience when generating images of people.”
Journalist Leslie Katz, a Forbes contributor since October 2023, covers science and consumer technology, often focusing on how they overlap with art.
Wood waste turned catalyst could unlock affordable green hydrogen from the ocean
Thanks to its three-dimensional porous structure, derived from wood-waste carbon, the electrode boasts a large surface area that promotes efficient reactions and charge transfer.
(Representative image) iStock
The electrode's structure evolves during oxygen evolution, forming an anti-corrosive layer and boosting stability.
Scientists have achieved a significant breakthrough in the production of green hydrogen.
They have created a novel kind of electrode, named the W-NiFeS/WC (W-doped nickel-iron (NiFe) sulfide/Wood-based carbon) electrode, which performs significantly well in seawater electrolysis.
Electrolysis is the process of splitting seawater into hydrogen and oxygen using electricity, and the latest development could revolutionize this process, giving a significant boost to sustainable energy.
This research, published in the journal Science Bulletin, tackles the challenges of seawater electrolysis while showcasing the potential of repurposing wood waste in electrochemical devices.
Path to decarbonization
Seawater electrolysis is seen as a promising method to reduce carbon emissions in the energy sector. It is expected to offer a clean and plentiful source of hydrogen fuel.
Despite its potential, issues such as anode corrosion, unwanted side reactions, and expensive catalysts have hindered its widespread adoption.
The W-NiFeS/WC electrode can tackle these issues. It has demonstrated superior activity and stability in both the key reactions needed for seawater electrolysis: the oxygen evolution reaction (OER) and the hydrogen evolution reaction (HER).
The electrode’s performance comes from its unique structure and chemistry. Its three-dimensional porous structure, derived from wood-waste carbon, provides a large surface area for reactions and efficient charge transfer. Densely anchored W-NiFeS nanoparticles further enhance its capabilities.
“Wood-based carbon (WC) structures have gained attention as an ideal substrate for these active materials due to their hierarchical porous nature and excellent conductivity,” the researchers mentioned in the press release.
Adding tungsten to the catalyst improves its anti-corrosion properties and stability, ensuring durability in seawater.
Self-healing mechanism and efficient catalysis
During the oxygen evolution reaction, this electrode undergoes a structural change that forms anti-corrosive materials on its surface, which boosts its stability.
“Especially, the in-situ structure evolution of W-NiFeS/WC in OER generates anti-corrosive tungstate and sulfate species on the surface of active Ni/Fe oxyhydroxides,” explained Zhijie Chen, the first author of the study.
“Also, the self-evolved W-NiFeS decorated NiFeOOH can catalyze HER efficiently,” added Chen. It further supports hydrogen production.
Not only is the W-NiFeS/WC electrode effective, but it’s also affordable to produce. This makes it ideal for large-scale applications in seawater electrolysis.
This could help reduce the cost of green hydrogen, making it a competitive alternative to fossil fuels.
This research also highlights the importance of a circular economy. By turning wood waste into valuable catalysts, scientists have shown a sustainable way to produce clean energy.
Towards a greener future
The innovative use of wood-waste-derived carbon structures in the electrode design opens doors for further advancements in sustainable material science.
The development of the W-NiFeS/WC electrode is a major step towards sustainable green hydrogen production. Its strong performance, affordability, and environmental friendliness make it a promising technology for a future powered by renewable energy.
Ongoing research in this area is likely to lead to further advancements, bringing us closer to a world of clean and abundant energy for everyone.
TRIZ Application: Making a more Ideal system by using waste as a resource; Principle #25 - Self-service; and Principle #24 - Mediator, using tungsten as an intermediary.
Creating the Self-Healing Grid of the Future
July 25, 2024
Sandia National Laboratories electrical engineer Michael Ropp and his team have created a library of codes to improve the resilience, reliability, and self-healing nature of the electric grid. (Image: Craig Fritz)
It’s not hard to imagine the potential value of a self-healing grid, one able to adapt and bounce back to life, ensuring uninterrupted power even when assailed by a hurricane or a group of bad actors. Together a team from Sandia National Laboratories and New Mexico State University is making this vision possible with a cutting-edge library of algorithms. By coding these algorithms into grid relays, the system can quickly restore power to as many hospitals, grocery stores, and homes as possible before grid operators can begin repairs or provide instructions.
“The ultimate goal is to enable systems to self-heal and form ad hoc configurations when things go really bad,” said Michael Ropp, Sandia electrical engineer and the project lead. “After the system is damaged or compromised, it can automatically figure out how to get to a new steady-state that provides power to as many customers as it possibly can; that’s what we mean by ‘self-healing.’ The key is that we’re doing it entirely with local measurements, so there is no need for expensive fiber optics or human controllers.”
The electrical grid of the future, as envisioned by Ropp and many others, will have more renewable energy supplies such as rooftop solar panels and wind turbines, along with local energy storage systems such as banks of batteries. Many of these systems will be able to form microgrids — small “islands” of power around hospitals, water treatment plants, and other critical infrastructure even if the main grid is down. This Sandia project enables those microgrids to automatically heal themselves when damaged and connect with one another to share power and serve as many customers as possible.
While microgrids can increase the resiliency of the grid, they need to automatically perform certain critical functions like balancing energy production with energy consumption and reconfiguring if part of the system becomes damaged or unavailable. This self-healing capability must also be able to avoid connecting microgrids in ways that cause problems — for example, by forming an unintentional loop in the circuit.
Today, to achieve this in microgrids that use power inverters, operators must install expensive high-speed communications that can be unreliable during disasters and vulnerable to cyberattacks. The purpose of this project, Ropp said, is to support self-healing using only the measurements that each individual device can make, reducing cost while increasing reliability.
One key function that microgrids with many inverters must perform is shutting off a few customers when the demand for electricity becomes larger than the supply. In grids powered by natural gas, coal, or nuclear power plants, when this demand-supply imbalance occurs, the frequency of the grid drops. When the existing relay algorithms detect this, they disconnect power to portions of the grid. However, when inverters designed to power microgrids become overloaded, they stop regulating the voltage of the power supply, and the voltage drops, Ropp said. The team developed an algorithm that uses this decrease in voltage to tell relays when to disconnect power to less vital customers.
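As a purely illustrative Python sketch (not Sandia's code), the snippet below shows the general shape of such an undervoltage-based load-shedding check; the per-unit threshold and hold time are invented placeholders.

UNDERVOLTAGE_PU = 0.90   # per-unit voltage threshold (assumed)
HOLD_CYCLES = 30         # consecutive low samples before tripping (assumed)

def should_shed(voltage_samples_pu: list[float]) -> bool:
    """Trip if the last HOLD_CYCLES samples are all below threshold."""
    recent = voltage_samples_pu[-HOLD_CYCLES:]
    return (len(recent) == HOLD_CYCLES
            and all(v < UNDERVOLTAGE_PU for v in recent))

# Example: a sustained sag after the inverters hit their current limit.
samples = [1.0] * 10 + [0.88] * 30
print(should_shed(samples))  # True -> disconnect the low-priority feeder

The hold time matters: tripping only on a sustained sag keeps momentary dips from shedding customers unnecessarily.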
In the wake of a natural disaster such as a hurricane or earthquake, hospitals, assisted living facilities, and water treatment plants are especially vital and thus critical to keep powered. Banks, grocery stores, and recreation centers or schools that serve as evacuation centers are also quite important for the functioning of a community.
The team also developed algorithms that allow the system to self-assemble in ways that avoid damaged areas. They used computer-aided-design software to model a small system of three interconnected microgrids and showed how even without communications, their algorithms allowed the system to balance power production and consumption, isolate certain issues such as tree-downed lines or a damaged power plant, and work around the issue to restore power to important facilities, Ropp said.
Most of North America’s grid infrastructure was designed to have single power lines with one-way power flow to houses, offices, and other average customers. Thus, the grid is not currently designed to be stable when operated in a loop, said Ropp and Matthew Reno, another Sandia electrical engineer involved in the project. Only certain custom-designed portions of the system can operate as a loop.
Microgrids and distributed resources like rooftop solar increase overall resiliency but also allow the chance for the grid to assemble into an unstable loop. Reno said, “We were trying to come up with possible measurements to figure out if the two sides were already connected so that closing the switch would form a loop.”
The team looked at some mathematical methods a breaker could use to determine whether the portions of the grid on either side of the breaker are powered by the same power supply and determined that two such methods worked for this purpose. The researchers shared a comparison of these methods in a paper published in the scientific journal IEEE Transactions on Power Delivery.
The team is also working on a solution to a similar problem: what to do when a power line that normally is at the end of the system finds itself supporting more current than it is rated for. They’ve developed a Morse-code-like method where an overloaded line relay modulates the voltage by opening and closing in a specific pattern, so the relays for lower-priority customers can detect this pattern and disconnect themselves until the line is no longer overloaded, Ropp said. While this could be considered communication, it doesn’t need a separate system, which might be vulnerable to hackers, or a human operator — it uses the power line itself to transmit the signal.
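Here is a hypothetical Python sketch of that pattern-detection idea; the open/close pattern, its encoding, and the sampling scheme are invented for illustration, not taken from the team's relays.

# 1 = voltage present, 0 = brief opening of the overloaded line relay.
OVERLOAD_PATTERN = [1, 0, 1, 1, 0]

def pattern_seen(samples: list[int], pattern: list[int]) -> bool:
    """Return True if the pattern appears anywhere in the sample stream."""
    n = len(pattern)
    return any(samples[i:i + n] == pattern
               for i in range(len(samples) - n + 1))

stream = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # contains the pattern at index 2
if pattern_seen(stream, OVERLOAD_PATTERN):
    print("Overload signal detected: disconnect until the line recovers")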
The researchers have been working on ways to improve the performance of these methods. For example, they have developed a method to quickly divide the microgrid into smaller sub-microgrids when an issue is detected. The hope is that this would isolate the issue to just one sub-microgrid, allowing the others to operate normally. The team’s initial testing suggests that this method of defining microgrid boundary points works sometimes, but not all the time, so there is more work to do.
Ropp and the team would like to work with manufacturers of line and load relays to incorporate their library of algorithms into the companies’ products, first to test them in a hardware-in-the-loop testbed and then possibly in real life at test facilities such as Sandia’s Distributed Energy Technologies Laboratory or at a similar medium-voltage facility at New Mexico State University, Lavrova said.
“We want this to become something that people can really use, especially low-income communities that can’t afford fiber optic communications at every single point on every single electrical circuit,” Ropp said. “You can get very good performance and very good resilience using our library of algorithms. And if you do have the communications, this can still be a backup.”
TRIZ: Using the concept of the Ideal Final Result, the managers are setting a goal that would optimize the performance of the energy grid. The grid applies the TRIZ principle of “self-service” to fix itself if there is a distribution or failure problem.
Desalination Breakthrough uses the Sun, not Electricity, to Clean Seawater
The desalination process works using a channel with a temperature gradient across it, which moves salt toward the cooler side.
Updated: May 21, 2024, 09:06 AM EST
Representational image: A desalination plant. iStock
Researchers at the Australian National University have developed a new approach for desalinating water that does not use electricity.
The method uses solar energy and can be deployed in remote locations, even in low-income countries.
With freshwater shortages seen in multiple parts of the globe, countries are turning to seawater and desalinating it to meet their water demands.
The World Bank estimates that as many as 300 million people in 150 countries depend on desalination for their water needs.
However, desalination is energy intensive. In 2018, it accounted for 100 billion kilowatt-hours of electricity consumption, a fourth of the energy spent on provisioning water. This can be attributed to techniques such as reverse osmosis, which uses high pressure to separate water from salt, or thermal methods that evaporate the water.
A research team led by Juan Felipe Torres, a professor at the School of Engineering at ANU, has turned to solar energy to reduce energy consumption by as much as 80 percent.
Low-grade heat for desalination
The researchers use a phenomenon called thermodiffusion, in which a temperature gradient moves salt from the warmer to the colder side, to bring about desalination. In this process, water remains in the liquid phase, and no energy is spent turning it into vapor and cooling it back.
In a technology demonstration, the researchers used a narrow channel for the seawater. They sandwiched it between two plates maintained at different temperatures. The top plate was heated to over 140 degrees Fahrenheit (60 degrees Celsius), while the lower plate was cooled to 68 degrees Fahrenheit (20 degrees Celsius).
The channel was a little over one and a half feet long, and low-salinity water emerged from the top while high-salinity water emerged from its bottom. After a single pass, cooler and saltier water was removed, and warmer and less salty water was put back into the setup.
Each pass saw the water’s salinity decrease by three percent, and using multiple cycles, the salinity decreased from 30,000 parts per million to less than 500 ppm.
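As a quick sanity check on those figures, the short Python calculation below estimates the number of passes needed if each pass removes a constant 3 percent of the salt, an assumption the article implies but does not state.

import math

# 30,000 ppm down to under 500 ppm at 3% reduction per pass (assumed constant).
start_ppm, target_ppm, per_pass = 30_000, 500, 0.03
passes = math.ceil(math.log(target_ppm / start_ppm) / math.log(1 - per_pass))
print(passes)  # ~135 passes under this constant-rate assumption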
Interestingly, the heat needed to carry out the process can come directly from sunlight or even waste heat generated during industrial processes.
Solving desalination problems
Conventional desalination approaches can consume up to 100 kWh of electricity for every cubic meter of clean water generated. The process is energy-intensive and requires expensive materials, such as membranes that are high maintenance and prone to corrosion.
Desalination plants are energy and maintenance-intensive, making them economically unfeasible for low-income countries.
“Thermodiffusive desalination is the first thermal desalination method that does not require a phase change,” said Torres in a press release. “It’s operated entirely in the liquid phase, and what’s more important is that it does not require membranes or other types of ion-adsorbing materials to purify water.”
The lack of membranes makes thermodiffusive desalination ideal for deployment on a large scale. The researchers are now building a multichannel device to be deployed in Tonga, which is facing a severe drought.
The device will be powered by a solar panel no bigger than a human face.
“Our dream is to enable a paradigm shift in desalination technology, based on methods that can be driven by low-temperature heat in our surrounding environment,” Torres told Tech Xplore.
TRIZ: Making the system more IDEAL by not using expensive electricity and trimming other elements like filters and membranes. Changing a single pass into multiple passes simplifies the process and promotes Principle #20, continuous action. This process exemplifies using FREE and readily available resources (the sun).
Midea IN Cassette Fits into Home Joist Spaces
March 20, 2024
The Midea IN cassette is a breakthrough in heat pump technology given its built-in design, blend-in style, and energy efficiency. The IN cassette is ready to fit into home joist spaces, making it an ideal option for new homes, add-ons, and conversions at a later date. Users can precisely heat and cool the needed spaces and spots throughout their homes – its industry-exclusive compact size of 50.3" W x 13.19" D x 9" H means it can fit just about anywhere.
For hang-up installation, hangers with an optimized anti-cutting design are easy to grab and lift, preventing hands from being scratched by sharp edges. The push-in installation technology includes a unique, exclusive screw-in design for easy installation. Installers can slide the IN cassette unit between the joists and fix it to the beams with screws.
There is no more climbing up and down, as the elevation panel can lower and raise itself. By activating the elevation panel function on the remote or smart controller, the panel descends for easy filter access. The built-in water pump can discharge the condensed water, so there’s no need to add an extra water pump to the side of the unit.
With high energy efficiency (up to 18.4 SEER2), cold climate performance (capable of 100% heat output down to -4 degrees Fahrenheit), and compatibility (able to mix and match with existing ducted and ductless equipment), Midea is working to make heat pump technology mainstream in American homes.
TRIZ influence in everyday new designs: Evolution #6 - scaling up or down. The unit has been made smaller to fit into the recesses of the joist space.
Principle #13 - do it in reverse. The panel moves down for easy adjustments, so the service person does not have to climb up to the unit.
Principle #12 - equipotentiality. The panel can be brought to the worker's level for easy access.
Principle #7 - nesting. Using the resource of spaces that would otherwise go unused in housing construction.
The First Battery Prototype Using Hemoglobin
by Andrew Corselli, February 2024
A team at the University of Cordoba has developed a battery that uses hemoglobin as an electrochemical reaction facilitator, functioning for around 20-30 days.
“Our hemoglobin-based zinc-air battery has similar working conditions to the human body. Because of that, we consider that is an appropriate battery for any electronic devices integrated into the human body,” said Lead Author Manuel Cano Luna. (Image: The researchers)
Hemoglobin is a protein present in red blood cells and is responsible for conveying oxygen from the lungs to the different tissues of the body (and then transferring carbon dioxide the other way around). It has a very high affinity for oxygen and is fundamental for life, but, what if it were also a key element for a type of electrochemical device in which oxygen also plays an important role, such as zinc-air batteries?
This is what the Physical Chemistry (FQM-204) and Inorganic Chemistry (FQM-175) groups at the University of Córdoba (UCO), together with a team from the Polytechnic University of Cartagena, set out to verify after a study by the University of Oxford and a Final Degree Project at the UCO demonstrated that hemoglobin has promising properties for the reduction and oxidation (redox) process by which energy is generated in this type of system. Through a Proof-of-Concept project, the research team developed the first biocompatible battery (one that is not harmful to the body) that uses hemoglobin in the electrochemical reaction transforming chemical energy into electrical energy.
In zinc-air batteries, one of the most sustainable alternatives to the lithium-ion batteries that currently dominate the market, hemoglobin functions as a catalyst: the protein facilitates the electrochemical reaction called the Oxygen Reduction Reaction (ORR). After air enters the battery, oxygen is reduced and transformed into water at the cathode (the positive pole), releasing electrons that pass to the anode (the negative pole), where zinc oxidation occurs.
"To be a good catalyst in the oxygen reduction reaction, the catalyst has to have two properties: it needs to quickly absorb oxygen molecules, and form water molecules relatively easily,” said Lead Author Manuel Cano Luna. “And hemoglobin met those requirements." In fact, through this process, the team managed to get their prototype biocompatible battery to work with 0.165 milligrams of hemoglobin for between 20 and 30 days.
In addition to strong performance, the battery prototype they have developed boasts other advantages. First, zinc-air batteries are more sustainable and can withstand adverse atmospheric conditions, unlike other batteries affected by humidity and requiring an inert atmosphere for their manufacture. Second, as Cano Luna noted, “The use of hemoglobin as a biocompatible catalyst is quite promising as regards the use of this type of battery in devices that are integrated into the human body,” such as pacemakers. The battery operates at pH 7.4, which is similar to that of blood. In addition, since hemoglobin is present in almost all mammals, proteins of animal origin could also be used.
The research team of the University of Cordoba. (Image: University of Cordoba)
The battery they have developed has some room for improvement, however. The main one is that it is a primary battery: it only discharges electrical energy and cannot be recharged. Therefore, the team is already taking the next steps to find another biological protein that can transform water into oxygen and thus recharge the battery. In addition, the batteries only work in the presence of oxygen, so they could not be used in space.
Here is an exclusive Tech Briefs interview — edited for length and clarity — with Cano Luna.
Tech Briefs: What made you choose hemoglobin? How did you decide on that?
Cano Luna: We were inspired by a research paper that reported the electrocatalytic properties of Hemoglobin (Hb) for the Oxygen Reduction Reaction (ORR).
It should be noted that ORR is a key cathodic reaction produced in hydrogen fuel cells and in zinc-air batteries (two important electrochemical devices for energy storage and conversion).
Tech Briefs: What was the biggest technical challenge you faced while developing this battery technology?
Cano Luna: Our major challenge was choosing the best combination of Hb and Nafion, in terms of long-term stability. Hb protein is very soluble in water. Therefore, to keep this protein on the surface of the air electrode, it was necessary to use Nafion.
Basically, the Nafion ionomer is a polymer that acts like a membrane, thus preventing the loss of Hb from the electrode surface.
Tech Briefs: Can you explain in simple terms how it works?
Cano Luna: Hb acts as an electrocatalyst in the cathodic reaction, adsorbing O2 from the air and reducing this O2 to H2O (ORR) with a four-electron pathway (i.e. performing both actions efficiently).
ORR on a bare working electrode typically leads to the formation of hydrogen peroxide, which involves a two-electron transfer: O2 + 2H+ + 2e- → H2O2.
A Hb-modified working electrode, in contrast, allows a four-electron conversion to water (i.e. the desirable one): O2 + 4H+ + 4e- → H2O.
Tech Briefs: What are the pros and cons of using hemoglobin batteries?
Cano Luna: Main advantage: The use of a biological protein as an ORR catalyst and an aqueous electrolyte solution at a neutral pH of 7.4 (i.e. physiological conditions) makes zinc-air batteries much more biocompatible, less dangerous, and more environmentally friendly.
Main disadvantage: The use of a biological molecule as an electrocatalyst limits the working conditions of this energy storage device, causing it to be not useful in extreme conditions (e.g. high temperature causes protein denaturation).
Tech Briefs: The article says, “The team is already taking the next steps to find another biological protein that can transform water into oxygen and, thus, recharge the battery.” How is that coming along? Any updates you can share?
Cano Luna: Currently, we are testing several proteins as possible electrocatalysts for Oxygen Evolution Reaction (OER). Once we choose the best candidate, the next step will be to identify a perfect combination with Hb, aiming for a “synergistic effect” between both proteins. OER is the reaction for charging the zinc-air battery: 2H2O → O2 + 4H+ + 4e-.
Tech Briefs: What are your next steps? Any other future research/work/etc. on the horizon?
Cano Luna: Our current efforts are focused on making this battery rechargeable, using only biological molecules as electrocatalysts. We consider that a new research line in the field of zinc-air batteries.
Breakthrough: Artificial DNA opens the door to designer proteins
Researchers used a synthetic DNA system called AEGIS to design two artificial nucleotides that flawlessly mimic the geometry of natural nucleotides.
by Rizwan Choudhury Published: Dec 17, 2023 08:02 AM EST
Concept image of genetic codes in DNA. ktsimage/iStock
DNA, the molecule that stores the genetic information of all living things, is made up of just four chemical letters, or nucleotides. But what if we could add more letters to this alphabet and create new kinds of DNA?
That's what a team of researchers from the University of California San Diego, the Foundation for Applied Molecular Evolution, and the Salk Institute for Biological Studies has done. They have developed a new version of DNA with six letters instead of four, showing that it can be used to make proteins, the building blocks of life.
This feat, published in Nature Communications, opens doors to a future where custom-designed proteins and novel biological applications could become a reality.
Four nucleotides
DNA, the blueprint of life, encodes its instructions using just four nucleotides – adenine (A), thymine (T), guanine (G), and cytosine (C). These nucleotides pair in specific configurations, forming the iconic double helix. But what if this alphabet could be expanded? The implications are compelling, ranging from personalized medicine to revolutionary materials.
"Life on Earth is amazingly diverse with just four nucleotides, so imagine what we could do with more," said Dong Wang, Ph.D., a professor at Skaggs School of Pharmacy and Pharmaceutical Sciences at UC San Diego and the senior author of the study.
"By expanding the genetic code, we could create new molecules that have never been seen before and explore new ways of making proteins as therapeutics."
Wang and his colleagues used a synthetic DNA system called AEGIS, which stands for Artificially Expanded Genetic Information System. AEGIS was created by Steven A. Benner, PhD, at the Foundation for Applied Molecular Evolution, as part of a NASA-funded project investigating how life could have evolved on other planets.
As Dong Wang aptly describes it, adding new “letters” to the genetic code “expands the vocabulary of life, allowing us to write more complex narratives.” His team’s breakthrough demonstrates that cells can readily incorporate synthetic nucleotides into the DNA recipe.
Using AEGIS
AEGIS adds two new letters to the standard DNA alphabet of adenine (A), thymine (T), guanine (G), and cytosine (C). These letters pair up in a specific way to form the double-helix structure of DNA, which was discovered by James Watson and Francis Crick in 1953.
The new letters, Z and P, have the same shape and size as the natural ones, so they can fit into the DNA helix without disrupting its geometry. This means that the enzymes that read and copy DNA, such as RNA polymerase, can recognize and process AEGIS DNA just like natural DNA.
RNA polymerase
The key lies in mimicking nature's machinery. The researchers identified RNA polymerase, a key enzyme that converts DNA into RNA, which is then used to make proteins. They designed two artificial nucleotides that flawlessly mimic the geometry of natural nucleotides. RNA polymerase readily accepted these novel additions when tested, seamlessly incorporating them into transcription.
The researchers tested whether RNA polymerase from bacteria could transcribe AEGIS DNA into RNA and found that it could do so with high accuracy and efficiency.
"This is a remarkable demonstration of how robust and adaptable the biological machinery is," said Wang. "By mimicking the natural shape of DNA, our synthetic letters can sneak in and be used to make new proteins."
This breakthrough paves the way for exciting possibilities. Imagine designing proteins with tailor-made properties capable of precisely targeting tumors for cancer therapy or engineering bacteria to synthesize eco-friendly biofuels. The vast horizons extend beyond medicine and environmental applications to materials science and potentially even synthetic biology.
Of course, challenges remain. Optimizing the incorporation of new nucleotides, ensuring their stability within the genome, and deciphering the full potential of this expanded code are areas for further exploration. Yet, the foundation for rewriting the genetic lexicon has been laid.
This discovery signifies a momentous leap in our understanding of life's blueprint. It holds the promise of ushering in a new era of biological design, where the possibilities are limited only by our imagination.
"These new proteins could have applications in medicine, biotechnology, and bioengineering," said Wang. "We are only scratching the surface of what we can do with artificial DNA."
Study abstract:
Artificially Expanded Genetic Information Systems (AEGIS) add independently replicable unnatural nucleotide pairs to the natural G:C and A:T/U pairs found in native DNA, joining the unnatural pairs through alternative modes of hydrogen bonding. Whether and how AEGIS pairs are recognized and processed by multi-subunit cellular RNA polymerases (RNAPs) remains unknown. Here, we show that E. coli RNAP selectively recognizes unnatural nucleobases in a six-letter expanded genetic system. High-resolution cryo-EM structures of three RNAP elongation complexes containing template-substrate UBPs reveal the shared principles behind the recognition of AEGIS and natural base pairs. In these structures, RNAPs are captured in an active state, poised to perform the chemistry step. At this point, the unnatural base pair adopts a Watson-Crick geometry, and the trigger loop is folded into an active conformation, indicating that the mechanistic principles underlying recognition and incorporation of natural base pairs also apply to AEGIS unnatural base pairs. These data validate the design philosophy of AEGIS unnatural base pairs. Further, we provide structural evidence supporting a long-standing hypothesis that pair mismatch during transcription occurs via tautomerization. Together, our work highlights the importance of Watson-Crick complementarity underlying the design principles of AEGIS base pair recognition.
- Details
- Category: Inside TRIZ
AI to speed up clinical trials
By Natalie Grover and Martin Coulter
September 22, 2023, 7:00 AM EDT
A robot miniature and the words 'Pharmaceutical Research - AI Artificial Intelligence' are seen in this illustration taken on July 17, 2023.
LONDON, Sept 22 (Reuters) - Major drugmakers are using artificial intelligence to find patients for clinical trials quickly, or to reduce the number of people needed to test medicines, both accelerating drug development and potentially saving millions of dollars.
Human studies are the most expensive and time-consuming part of drug development: recruiting patients and trialing new medicines can take years, and the full process from the discovery of a drug to the finish line can cost over a billion dollars.
Pharmaceutical companies have been experimenting with AI for years, hoping machines can discover the next blockbuster drug. A few compounds picked by AI are now in development, but those bets will take years to play out.
Reuters interviews with more than a dozen pharmaceutical company executives, drug regulators, public health experts, and AI firms show, however, that the technology is playing a sizeable and growing role in human drug trials.
Companies such as Amgen (AMGN.O), Bayer (BAYGn.DE), and Novartis (NOVN.S) are training AI to scan billions of public health records, prescription data, medical insurance claims, and their internal data to find trial patients - in some cases halving the time it takes to sign them up.
"I don't think it's pervasive yet," said Jeffrey Morgan, managing director at Deloitte, which advises the life sciences industry. "But I think we're past the experimentation stage."
The U.S. Food and Drug Administration (FDA) said it had received about 300 applications that incorporate AI or machine learning in drug development from 2016 through 2022. Over 90% of those applications came in the past two years and most were for the use of AI at some point in the clinical development stage.
ATOMIC AI
Before AI, Amgen would spend months sending surveys to doctors from Johannesburg to Texas to ask whether a clinic or hospital had patients with relevant clinical and demographic characteristics to participate in a trial.
Existing relationships with facilities or doctors would often sway the decision on selecting trial sites.
However, Deloitte estimates about 80% of studies miss their recruitment targets because clinics and hospitals overestimate the number of available patients, dropout rates are high, or patients don't adhere to trial protocols.
Amgen's AI tool, ATOMIC, scans troves of internal and public data to identify and rank clinics and doctors based on past performance in recruiting patients for trials.
Enrolling patients for a mid-stage trial could take up to 18 months, depending on the disease, but ATOMIC can cut that in half in the best-case scenario, Amgen told Reuters.
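Amgen has not disclosed ATOMIC's internals, but the core idea of ranking candidate sites on historical recruitment performance can be sketched in a few lines of Python. The field names and scoring rule below are illustrative assumptions, not Amgen's actual criteria:

```python
# Illustrative ranking of candidate trial sites by past recruitment
# performance. Field names and weights are invented for this sketch;
# they are not Amgen's actual ATOMIC criteria.
sites = [
    {"name": "Clinic A", "enrolled": 120, "target": 100, "dropout_rate": 0.10},
    {"name": "Clinic B", "enrolled": 40,  "target": 100, "dropout_rate": 0.05},
    {"name": "Clinic C", "enrolled": 95,  "target": 100, "dropout_rate": 0.30},
]

def site_score(site: dict) -> float:
    """Score = fraction of enrollment target met, discounted by dropout."""
    fill_rate = site["enrolled"] / site["target"]
    return fill_rate * (1.0 - site["dropout_rate"])

# Rank sites best-first for trial placement.
for site in sorted(sites, key=site_score, reverse=True):
    print(f'{site["name"]}: {site_score(site):.2f}')
```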
Amgen has used ATOMIC in a handful of trials testing drugs for conditions including cardiovascular disease and cancer, and it aims to use it for most studies by 2024.
The company said by 2030, it expects AI will have helped it shave two years off the decade or more it typically takes to develop a drug.
The AI tool Novartis uses has also made enrolling patients in trials faster, cheaper, and more efficient, said Badhri Srinivasan, its head of global development operations. But he said AI in this context is only as good as the data it gets.
In general, less than 25% of health data is publicly available for research, according to Sameer Pujari, an AI expert at the World Health Organization.
EXTERNAL CONTROL ARMS
German drugmaker Bayer said it used AI to cut the number of participants needed by several thousand for a late-stage trial of asundexian, an experimental drug designed to reduce the long-term risk of strokes in adults.
It used AI to link the mid-stage trial results to real-world data from millions of patients in Finland and the United States to predict the long-term risks in a population similar to the trial's.
Armed with the data, Bayer started the late-stage trial with fewer participants. Without AI, Bayer said it would have spent millions more, and taken up to nine months longer to recruit volunteers.
Now the company wants to take it a step further.
For a study to test asundexian in children with the same condition, Bayer said it plans to use real-world patient data to generate a so-called external control arm, potentially eliminating the need for patients to take a placebo.
That's because the condition is so rare in the age group that it would be difficult to recruit patients and could raise concerns about whether it was ethical to give trial participants a placebo when there are no proven treatments available.
Instead, Bayer aims to mine anonymized real-world data of children with similar vulnerabilities.
Bayer said it hoped that would be enough to help discern how effective the drug is. Finding real-world patients by mining electronic patient data can be done manually, but using AI speeds up the process dramatically.
While unusual, external control arms have been used in the past instead of traditional randomized control arms where half the participants take a placebo - mainly for rare diseases where there are few patients or no existing treatments.
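The statistical heart of an external control arm is selecting real-world patients who resemble those enrolled in the trial. Below is a toy sketch of nearest-neighbor matching in Python; it is illustrative only, and real external arms use many more covariates and formal methods such as propensity scoring:

```python
# Toy external-control-arm construction: for each trial participant,
# pick the most similar real-world patient on (age, severity score).
# Illustrative only; not any company's actual methodology.
trial_patients = [(62, 3.1), (70, 4.5), (55, 2.0)]          # (age, severity)
real_world_pool = [(61, 3.0), (75, 4.8), (58, 2.2), (69, 4.4), (50, 1.8)]

def distance(a, b):
    """Euclidean distance over (age, severity); real pipelines would
    standardize covariates first."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

external_arm = []
pool = list(real_world_pool)
for patient in trial_patients:
    match = min(pool, key=lambda candidate: distance(patient, candidate))
    pool.remove(match)           # match without replacement
    external_arm.append(match)

print(external_arm)  # matched controls standing in for a placebo arm
```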
Amgen's drug Blincyto, designed to treat a rare form of leukemia, received U.S. approval after adopting this approach, although the company had to conduct a follow-up study to confirm the drug's benefit once it was on sale.
Blythe Adamson, senior principal scientist at Roche (ROG.S) subsidiary Flatiron Health, said the advantage of AI was that it let scientists examine real-world patient data quickly, and at scale.
She said it could take months to trawl through data from 5,000 patients using traditional methods: "Now we can learn those same things for millions of patients in days."
OVERESTIMATION RISK
Drugmakers typically seek prior approval from regulators to test a drug using an external control arm.
Bayer said it was in discussions with regulators, such as the FDA, about relying on AI to create an external arm for its pediatric trial. The company did not offer additional details.
The European Medicines Agency (EMA) said it had not received any applications from companies seeking to use AI in this way.
Some scientists, including the FDA's oncology chief, are worried drug companies will try to use AI to come up with external arms for a broader range of diseases.
"When you're comparing one arm without randomization to another arm, you are assuming that you have the same populations in both. That doesn't account for the unknown," said Richard Pazdur, director of the FDA's Oncology Center of Excellence.
Patients in trials tend to feel better than people in the real world because they believe they are getting an effective treatment and also receive more medical attention, which can in turn inflate a drug's apparent success.
This risk is one of the reasons regulators tend to insist on randomized trials: all patients believe they may be getting the drug, even though half are on a placebo, so the effect is balanced across both arms.
Gen Li, founder of clinical data analytics firm Phesi, said many companies were exploring AI's potential to reduce the need for control groups.
Regulators, however, say that although AI has the potential to augment the clinical trial process, evidentiary standards for a drug's safety and effectiveness will not change.
"The main risks with AI are that we want to make sure we don't get the wrong answer to the question of whether a drug works," said John Concato, associate director for real-world evidence analytics in the Office of Medical Policy in the FDA's Center for Drug Evaluation and Research.
Reporting by Natalie Grover and Martin Coulter in London; Additional reporting by Julie Steenhuysen in Chicago; Editing by Josephine Mason and David Clarke
- Details
- Category: Inside TRIZ
By Bill Schweber June 1, 2023
Researchers are investigating an array of different skin-wearable patches for medical monitoring and healing.
What You’ll Learn
- Why patch-based wearable devices for medical monitoring are gaining more research attention.
- How these devices are fabricated to achieve specific objectives.
- How some wearable devices implement monitoring and others add control.
Medical-related electronics research is receiving lots of attention (and funding) due to the confluence of various technical, medical, population, and other factors. How much influence each of these has could be discussed at length without a firm conclusion. But one thing is clear: Much of the work being done, especially in university-related settings, is centered on wearable skin patches that provide different types of functionality.
Why patches? The question should really be “Why not?” They provide a direct way to accomplish various objectives, are relatively easy to monitor, don’t require invasive surgery, and are “student-research” friendly. They are certainly much more easily evaluated and tested than implanted devices, which also accelerates the entire development and evaluation cycle.
The four recent patch-related projects cited here are just a few of the ones that have been recently published. Note that each has a different focus, showing the breadth of patch-related research now underway.
Example #1: Screen-Printed Electrodes
A multi-institutional research effort led by Washington State University has demonstrated electrodes that can be created using just screen printing. The resultant stretchable, durable circuit patterns can be transferred to fabric and worn directly on human skin. In contrast, current commercial manufacturing of wearable electronics requires fairly expensive processes involving clean rooms.
While some other implementations presently use screen printing for parts of the process, this new method relies strictly on screen printing.
The researchers used a multi-step process of layering polymers and metal inks to create the electrode's snake-like structures. Screen printing the polyimide (PI) layer enables facile, low-cost, scalable, high-throughput manufacturing.
PI mixed with poly(ethylene glycol) exhibited a shear-thinning behavior, significantly improving the printability of PI. The premixed Ag/AgCl ink is then used for conductive layer printing. Multiple electrodes are printed onto a pre-treated glass slide, which allows them to be easily peeled off and transferred onto fabric or other material (Fig. 1).
1. For the stretchable, conformable patches, the multistep process involves layering polyimide (PI) and conductive Ag/AgCl inks to create structures of the electrode.
While the resulting thin pattern appears delicate, the electrodes aren't fragile. The serpentine pattern of the screen-printed electrode accommodates natural deformation under stretching (30%) and bending (180 degrees) conditions, as verified by computational and experimental studies (Fig. 2).
2. This bottom view of the device doesn’t show most of the inner workings (ACF is anisotropic conductive film).
After printing the electrodes, the researchers transferred them onto an adhesive fabric that was then worn directly on the skin by volunteers. The wireless electrodes with an on-board associated circuit, including a 2.4-GHz Bluetooth link, accurately recorded heart and respiratory rates, sending the data to a mobile phone (Fig. 3). Washington State University
3. An exploded view of the device provides more perspective on the completed arrangement.
In addition to Washington State University, the team included the Georgia Institute of Technology and Pukyong National University in South Korea. The work is detailed in the American Chemical Society (ACS) paper “Fully Screen-Printed PI/PEG Blends Enabled Patternable Electrodes for Scalable Manufacturing of Skin-Conformal, Stretchable, Wearable Electronics,” with more details in the Supporting Information file.
Example #2: Electronic Bandage Speeds Healing
Northwestern University researchers developed a small, flexible, stretchable bandage—claimed to be a first-of-its-kind—that accelerates healing by delivering electrotherapy directly to the wound site. In an animal study, the new bandage healed diabetic ulcers in mice 30% faster than in mice without the bandage. The bandage also actively monitors the healing process and then harmlessly dissolves—electrodes and all—into the body after it is no longer needed.
Injuries can disrupt the body's normal electrical signals. The researchers wanted to see if electrical stimulation therapy could help close small, stubborn wounds where high glucose levels due to diabetes thicken capillary walls, slowing blood circulation and making it more difficult for these wounds to heal. Applying electrical stimulation restores the body's normal signals, attracting new cells to migrate to the wound bed.
“Although it’s an electronic device, the active components that interface with the wound bed are entirely resorbable,” said Northwestern’s John A. Rogers, who co-led the study. “As such, the materials disappear naturally after the healing process is complete, thereby avoiding any damage to the tissue that could otherwise be caused by physical extraction.”
The joint team developed a small, flexible bandage that softly wraps around the injury site. One side of the smart regenerative system contains two electrodes: a tiny flower-shaped electrode that sits right on top of the wound bed and a ring-shaped electrode that sits on healthy tissue to surround the entire wound. The other side of the device contains an energy-harvesting coil to power the system and a near-field communication (NFC) system for real-time updates (Fig. 4).
Fig.4. Materials and designs of a bioresorbable, wireless, and battery-free electrotherapy system: (a) Schematic illustrations of a transient, wireless, battery-free system for electrotherapy mounted on a wound on the foot (left) and in an enlarged view (right) that highlights the different components. (b) Operational diagram of the entire system (RF, radio frequency; ISO, interconnection system operation; LDO, low-dropout regulator). (c) FEA results from the electric field between the positive (+) and negative (−) electrodes. Scale bar, 3 mm. (d) Schematic illustrations of the mode of use, device on a wound before (i) and after healing (ii), removed by cutting the traces to the anode (iii), partially bioresorbed after a period of therapy (iv), and fully bioresorbed (v). The semitransparent orange color represents the healed skin.
The team also included sensors that can assess how well the wound is healing. The device can be operated remotely without wires. This enables the physician to decide when to apply the electrical stimulation and monitor the wound’s healing progress.
By measuring the resistance of the electrical current across the wound, physicians are able to monitor progress. A gradual decrease in current measurement relates directly to the healing process. So, if the current remains high, then physicians know something is wrong.
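A rough sketch of that monitoring rule, with hypothetical current readings and an illustrative decline threshold (this is not Northwestern's actual firmware logic):

```python
# Sketch of the monitoring idea: healing should show a gradual decline
# in the current measured across the wound. Values and the threshold
# are illustrative, not from the Northwestern study.
def healing_on_track(currents_ua, min_decline_fraction=0.10):
    """Flag healing as on track if current has dropped by at least
    min_decline_fraction from the first to the latest measurement."""
    if len(currents_ua) < 2:
        return True  # not enough data to judge yet
    decline = (currents_ua[0] - currents_ua[-1]) / currents_ua[0]
    return decline >= min_decline_fraction

daily_currents = [50.0, 47.0, 43.5, 39.0, 35.0]       # microamps, hypothetical
print(healing_on_track(daily_currents))               # True: falling as expected
print(healing_on_track([50.0, 50.5, 49.8, 50.2]))     # False: flat trend, flag for review
```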
In a study of animal models, the researchers applied electrical stimulation for just 30 min. a day. Even this short amount of time accelerated the closure by 30%. When the wound is healed, the flower-shaped, ultra-thin molybdenum electrode simply dissolves into the body, bypassing the need to retrieve it. Furthermore, it doesn’t interfere with the healing process.
The work is explored in their paper in Science Advances, "Bioresorbable, wireless, and battery-free system for electrotherapy and impedance sensing at wound sites," which also includes appended Supplemental Information.
Example #3: Sweat Sensor with Immediate Readout
Sweat is a “gold mine” of useful information about the body. Recognizing its virtues, engineers at the University of California San Diego developed a thin, flexible, and stretchy sweat sensor that can show the level of glucose, lactate, sodium, or pH of sweat, and do so at the press of a finger.
They claim it’s the first standalone wearable device that allows the sensor to operate independently—without any wired or wireless connection to external devices—to directly visualize the result of the measurement. The design of this small disk-shaped patch includes all of the essential components that are required for wearable sensors: two integrated batteries, a microcontroller, sensors, the circuit, and a stretchable display.
Fabrication of the device involves the formulation of nine different types of stretchable inks, which were used to print the batteries, circuits, display panels, and sensors. The device is printed layer-by-layer onto stretchable polymer sheets and then assembled with hydrogels and microcontroller chips into the complete device. Each ink was optimized to ensure its compatibility with other layers while balancing its electrical, chemical, and mechanical performance (Fig. 5). University of California San Diego
5. System overview of the all-printed skin-interfaced ECD sensing patch: (a) Exploded view detailing the individual layers of the epidermal patch. (b) System flowchart of the system and the zoomed-in view of the individual modules. (c) Operation of the patch. A photographic image demonstrates the patch used for epidermal sweat sensing by instantaneously revealing the target concentration (i). Illustration of the change in display that changes with the electrolyte concentration and readout of the potentiometric sensor (ii) and the intermittent discharge mode of the Ag2O–Zn battery that supplies power to the system (iii). (d) Photographic images demonstrating the mechanical performance and durability of the patch, including its bending (i) or stretching (ii) and stretching of the connection between the interconnect and MCU (iii).
Typical electrochromic displays require transparent glass panels with a conductive but brittle coating, which would not work for this device. Instead, the researchers turned to a special polymer called PEDOT:PSS, which is both conductive and has electrochromic properties.
The polymer changes from light sky blue to dark navy blue when a negative voltage is applied, and turns back under a positive voltage. By tuning the PEDOT:PSS ink formulation, the researchers made it both printable and stretchable—the patch can be stretched 20% repeatedly without affecting its performance (Fig. 6). University of California San Diego
6. The patch can be stretched 20% repeatedly without affecting its performance.
The researchers designed a display panel composed of 10 individual pixels, which is programmed to display the concentration of the chemicals by turning on different pixels. After optimizing the operation condition of the display, each pixel can be turned on and off reversibly over 10,000 cycles, more than sufficient for its week-long operation.
The pixels only take 500 ms to change color, during which time they consume just 80 µW of power on average. As it requires no power to maintain the displayed result, the display is very energy-efficient for its application.
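Those figures imply a tiny energy budget per update. A quick back-of-the-envelope check using only the numbers quoted above:

```python
# Energy per pixel color change, from the figures quoted above:
# 80 microwatts average power over a 500 ms transition.
power_w = 80e-6      # 80 uW average during the color change
duration_s = 0.5     # 500 ms to switch color
energy_j = power_w * duration_s
print(f"{energy_j * 1e6:.0f} microjoules per pixel update")  # 40 uJ

# Since the electrochromic display holds its state without power,
# even 10,000 updates over week-long wear would total only:
print(f"{energy_j * 10_000:.2f} joules")  # 0.40 J
```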
Full details can be found in the Nature Electronics paper “A stretchable epidermal sweat sensing platform with an integrated printed battery and electrochromic display,” which also has appended Supplemental Information.
Example #4: Smart Bandage Promotes, Monitors Healing
A smart bandage developed at the California Institute of Technology (Caltech) may make treatment of wounds that will not go away—and ultimately become infected and fester—easier, more effective, and less expensive.
Unlike a typical bandage, which might only consist of layers of absorbent material, smart bandages are made from a flexible and stretchy polymer containing embedded electronics and medication. The electronics enable the sensor to monitor for molecules like uric acid (UA) or lactate and conditions like pH level or temperature in the wound that may be indicative of inflammation or bacterial infection (Fig. 7). Caltech
7. A wireless stretchable wearable bioelectronic system for multiplexed monitoring and treatment of chronic wounds: (A) Schematic of a soft wearable patch on an infected chronic nonhealing wound on a diabetic foot. (B) Schematic of layer assembly of the wearable patch, showing the soft and stretchable SEBS poly[styrene-b-(ethylene-co-butylene)-b-styrene] substrate, the custom-engineered electrochemical biosensor array, a pair of voltage-modulated electrodes for controlled drug release and electrical stimulation, and an anti-inflammatory and antimicrobial drug-loaded electroactive hydrogel layer. (C) Schematic layout of the smart patch consisting of a temperature (T) sensor; pH, ammonium (NH4+), glucose (Glu), lactate (Lac), and UA sensing electrodes; reference (Ref) and counter electrodes; and a pair of voltage-modulated electrodes for controlled drug release and electrical stimulation. (D, E) Photographs of the fingertip-sized stretchable and flexible wearable patch. Scale bars, 1 cm. (F, G) Schematic diagram (F) and photograph (G) of the fully integrated miniaturized wireless wearable patch. Scale bar, 1 cm. (ADC, analog-to-digital converter; AFE, analog front end; PSoC, programmable system-on-chip; MUX, multiplexer; BLE, Bluetooth Low Energy) (H) Photograph of a fully integrated wearable patch on a diabetic rat with an open wound. Scale bar, 2 cm.
The bandage can respond in one of three ways. First, it can transmit the gathered data from the wound wirelessly to a nearby computer, tablet, or smartphone for review by the patient or a medical professional. Second, it can deliver an antibiotic or other medication stored within the bandage directly to the wound site to treat the inflammation and infection. Third, it can apply a low-level electrical field to the wound to stimulate tissue growth, resulting in faster healing.
The wearable patch is mechanically flexible and stretchable. It can also conformally adhere to the skin wound throughout the entire wound-healing process, preventing any undesired discomfort or skin irritation. The patch monitors a panel of wound biomarkers, including temperature, pH, ammonium, glucose, lactate, and UA. They were chosen on the basis of their importance in reflecting the infection, metabolic and inflammatory status of chronic wounds.
The disposable wearable patch consists of a multimodal biosensor array for simultaneous and multiplexed electrochemical sensing of wound exudate biomarkers, a stimulus-responsive electroactive hydrogel loaded with a dual-function anti-inflammatory and antimicrobial peptide (AMP), and a pair of voltage-modulated electrodes for controlled drug release and electrical stimulation.
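The closed-loop idea of reading the biomarker panel and choosing a treatment response can be sketched as follows; the thresholds and decision rule are invented for illustration and are not the Caltech team's clinical criteria:

```python
# Illustrative closed-loop logic for a smart bandage: read the biomarker
# panel, then decide on treatment. Thresholds are hypothetical, not the
# Caltech team's clinical criteria.
reading = {"temp_c": 38.2, "ph": 8.4, "uric_acid_um": 420, "lactate_mm": 9.0}

INFECTION_THRESHOLDS = {"temp_c": 38.0, "ph": 8.0}  # elevated values suggest infection

def choose_action(reading: dict) -> str:
    """Pick between the patch's two treatment modes based on the panel."""
    infected = (reading["temp_c"] > INFECTION_THRESHOLDS["temp_c"]
                and reading["ph"] > INFECTION_THRESHOLDS["ph"])
    if infected:
        return "release antimicrobial/anti-inflammatory drug from hydrogel"
    return "apply low-level electrical stimulation to promote tissue growth"

print(choose_action(reading))
```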
The multiplexed sensor array patch is fabricated via standard microfabrication protocols on a sacrificial layer of copper, followed by transfer printing onto a SEBS (styrene-ethylene-butylene-styrene) thermoplastic elastomer substrate. The serpentine-like design of the electronic interconnects and the highly elastic nature of SEBS enable high stretchability and resilience of the sensor patch against undesirable physical deformations.
A four-layer flexible printed circuit board (FPCB) measuring 36.5 × 25.5 mm was used, with the sensor patch directly underneath the FPCB accessed through a rectangular cutout (12 × 3.8 mm). The power-management circuitry consists of a magnetic reed switch and a voltage regulator. The electrical stimulation and drug-delivery circuitry uses a voltage reference, an op-amp square-wave generator circuit, and a switch array. The potentiometric, amperometric, and temperature sensor interface circuitry consists of a voltage buffer array, switch array, voltage divider, and electrochemical analog front end.
A programmable system-on-chip (SoC) Bluetooth Low Energy (BLE) module was used for data processing and wireless communication. The fully integrated wearable device was attached to the mice or rats using 3M double-sided tape and fixed with liquid adhesive to enable strong adhesion, allowing the animals to move freely over a prolonged period.
The work is discussed in their paper "A stretchable wireless wearable bioelectronic system for multiplexed monitoring and combination treatment of infected chronic wounds," published in Nature Communications. The paper is hard to follow due to its stilted, formal tone and its heavy skew toward biochemistry and medical implications rather than fabrication and electronics. There's also a 28-page Supplementary Materials file that provides more details on the device construction and includes a schematic diagram and bill of materials.
This article appeared in Electronic Design, an affiliate publication of Machine Design.
- Details
- Category: Inside TRIZ
In War Against Industrial Corrosion, Clean Lasers Prove Very Effective
Clean technology lasers offer industrial corrosion removal in myriad applications.
Del Williams Aug 10, 2023
"
The technology minimizes operator exposure to potential environmental health hazards and no consumables are necessary." Laser Photonics
Industries have been fighting a war against corrosion in metal infrastructure, equipment, and products at great expense for generations.
The global cost of corrosion is estimated at $2.5 trillion, equivalent to 3.4% of global gross domestic product (GDP) as of 2013, according to a NACE International IMPACT study that examined the current role of corrosion management in industry and government and established best practices.
Given the massive industrial outlay, proactively controlling corrosion is imperative and can have an equally impressive ROI.
“By using available corrosion control practices, it is estimated that savings of between 15 and 35% of the cost of corrosion could be realized, i.e., between US$375 and $875 billion annually on a global basis…The fact that corrosion control provides a cost-benefit is a lesson learned over and over again by industry, often too late and following catastrophic events,” continued the NACE International IMPACT study.
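The arithmetic behind those estimates is easy to verify:

```python
# Reproducing the NACE IMPACT study arithmetic quoted above.
global_cost = 2.5e12                     # $2.5 trillion annual cost of corrosion
savings_low, savings_high = 0.15, 0.35   # 15-35% achievable savings

low = global_cost * savings_low
high = global_cost * savings_high
print(f"${low/1e9:.0f} billion to ${high/1e9:.0f} billion per year")
# -> $375 billion to $875 billion per year, matching the study's range
```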
However, traditional methods of removing corrosion can be messy, laborious, time-consuming, and can even pose serious health hazards.
Today, one of the easiest to use and most effective alternatives in the war against corrosion is the increasingly important category of industrial-grade, clean technology lasers.
With this approach, precision laser-based systems are used to remove corrosion, contaminants, paint, and residues with a high-energy laser beam that leaves the substrate unaffected. Preparation and cleanup time are minimal, and the low-maintenance equipment can last decades. The technology minimizes operator exposure to potential environmental health hazards. In addition, no consumables are necessary.
"CleanTech laser systems can last for 50,000 to 100,000 hours with virtually no maintenance needed after purchase and no consumables required." Laser Photonics
Corrosion and the Limits of Conventional Control
Any industry with metal infrastructure, processing equipment, or products exposed to water, fluids, moisture, or atmospheric humidity continually fights corrosion, which causes the deterioration and loss of a material and its critical properties through chemical and electrochemical reactions of the exposed surface with the surrounding environment. Corrosion affects the microstructure, mechanical properties, and physical appearance of materials.
The direct cost of corrosion includes a loss of materials, equipment, and production, plus the cost of repair, maintenance, and replacement. Additional losses can result from accidents, injuries, and even loss of life as well as payments to repair environmental damage.
Within the continual struggle against industrial corrosion, one important niche area of corrosion control involves the pretreating of metal surfaces to remove corrosion and contaminants before coating or welding.
Although metal surface pretreatment is a small portion of industrial corrosion control, it is crucial to ensure the safety, performance, and longevity of products and structures.
Insufficient coating pretreatment can lead to inadequate protection from the environment, leading to potential coating failure, moisture entry, and accelerated corrosion as well as increased maintenance, early replacement, and warranty issues. Similarly, insufficient weld pretreatment to remove corrosion and contaminants can lead to weakened or failed welds and necessary rework as well as substantial safety, liability, and litigation risk.
Clean technology lasers offer superior industrial corrosion removal in myriad applications, helping solve some of the industry’s most costly corrosion problems. Laser Photonics
A More Effective Weapon to Eliminate Corrosion
In many industries, it is necessary to remove corrosion, residue, oil, grease, or paint before coating a product or infrastructure to improve coating adhesion.
Toward this end, laser-based systems have significant advantages over traditional methods, starting with ease of use.
"With laser-based systems, an operator simply points and clicks a high-energy laser beam at the surface. The substrate is not affected by the laser and the systems do not create any mess or byproducts. The approach is eco-friendly and energy-efficient. It completes the job in approximately half the time of traditional methods when preparation and cleanup are considered. Also, no consumables are required," said Wayne Tupuola, CEO of Orlando, Florida-based Laser Photonics, a leading provider of patented industrial-grade CleanTech lasers for cleaning and surface conditioning. The company's systems function either as mobile standalone units or can be integrated into production lines.
In the case of Laser Photonics, the laser systems are available in portable and stationary models ranging from 50 to 3,000 watts (a 4,000-watt version is in development), with chamber sizes from 3 ft x 3 ft to 6 ft x 12 ft. The systems can also be installed in manufacturing lines in cabinets or operated by a robotic arm.
In industry, the laser pre-treatment of metal surfaces can be used to streamline various manufacturing processes. For instance, it has been used to remove rust from hundreds of automotive transmissions per day. It has also been utilized to eliminate corrosion from conveying system components.
The CleanTech lasers are also used to refurbish industrial infrastructure, such as when removing a previous coating along with any corrosion to facilitate the new coating’s adhesion to the surface.
Another common laser application involves pre-weld treatment to remove corrosion, mill scale, residue, and any impurities on the surface of the base material that would compromise the weld’s effectiveness. It is essential to avoid any such contamination on a weld’s surface, which could otherwise lead to a weakening of the weld’s mechanical properties, requiring rework.
Laser treatment is also used for post-weld cleaning to increase the life expectancy and corrosion resistance of a welded joint. Post-weld cleaning is important for stainless steel as well. Welding can cause a “heat tint,” a discolored, thickened top layer on the stainless steel around the weld bead within the heat-affected zone that compromises corrosion resistance. Removing the heat-tinted top layer is necessary to restore stainless steel’s full corrosion resistance and aesthetic value.
A further benefit of laser systems is that some of the most advanced units are designed to last for decades. For example, CleanTech laser systems can last for 50,000 to 100,000 hours. In addition, virtually no maintenance is needed after purchase and no consumables are required.
Given the devastating cost of corrosion to industry and the inherent limitations of typical control methods, lasers are becoming a best practice technique to combat it in facilities and in the field. Laser treatment effectively removes corrosion for many industrial applications, minimizes cleanup time and operator exposure to potential environmental health hazards, lasts for decades, and requires no consumables.
- Details
- Category: Inside TRIZ
Metal-Filtering Sponge Removes Lead from Water
The researchers tested their new sponge on a highly contaminated sample of tap water. May 11, 2023
Source: Caroline Harms/Northwestern University
Northwestern University engineers have developed a new sponge that can remove metals — including toxic heavy metals like lead and critical metals like cobalt — from contaminated water, leaving safe, drinkable water behind.
In proof-of-concept experiments, the researchers tested their new sponge on a highly contaminated sample of tap water, containing more than 1 part per million of lead. With one use, the sponge filtered lead to below detectable levels.
After using the sponge, researchers also were able to successfully recover metals and reuse the sponge for multiple cycles. The new sponge shows promise for future use as an inexpensive, easy-to-use tool in home water filters or large-scale environmental remediation efforts.
The study was published late yesterday (May 10) in the journal ACS ES&T Water. The paper outlines the new research and sets design rules for optimizing similar platforms for removing — and recovering — other heavy-metal toxins, including cadmium, arsenic, cobalt, and chromium.
“The presence of heavy metals in the water supply is an enormous public health challenge for the entire globe,” said Northwestern’s Vinayak Dravid, senior author of the study. “It is a gigaton problem that requires solutions that can be deployed easily, effectively, and inexpensively. That’s where our sponge comes in. It can remove the pollution and then be used again and again.”
Dravid is the Abraham Harris Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering and director of global initiatives at the International Institute for Nanotechnology.
Sopping up spills
The project builds on Dravid's previous work to develop highly porous sponges for various aspects of environmental remediation. In May 2020, his team unveiled a new sponge designed to clean up oil spills. The nanoparticle-coated sponge, which is now being commercialized by Northwestern spinoff MFNS Tech, offers a more efficient, economical, eco-friendly, and reusable alternative to current approaches to oil spills.
But Dravid knew it wasn't enough. "When there is an oil spill, you can remove the oil," he said. "But there also are toxic heavy metals — like mercury, cadmium, sulfur, and lead — in those spills. So, even when you remove the oil, some of the other toxins might remain."
Rinse and repeat
To tackle this aspect of the issue, Dravid’s team, again, turned to sponges coated with an ultrathin layer of nanoparticles. After testing many different types of nanoparticles, the team found that a manganese-doped goethite coating worked best. Not only are manganese-doped goethite nanoparticles inexpensive to make, easily available and nontoxic to humans, but they also have the properties necessary to selectively remediate heavy metals.
"You want a material with a high surface area, so there's more room for the lead ions to stick to it," said Benjamin Shindel, a Ph.D. student in Dravid's lab and the paper's first author. "These nanoparticles have high surface areas and abundant reactive surface sites for adsorption, and are stable, so they can be reused many times."
The team synthesized slurries of manganese-doped goethite nanoparticles, as well as several other compositions of nanoparticles, and coated commercially available cellulose sponges with these slurries. Then, they rinsed the coated sponges with water in order to wash away any loose particles. The final coatings measured just tens of nanometers in thickness.
When submerged in contaminated water, the nanoparticle-coated sponge effectively sequestered lead ions. The U.S. Food and Drug Administration requires that bottled drinking water contain less than 5 parts per billion of lead. In filtration trials, the sponge lowered the amount of lead to approximately 2 parts per billion, making the water safe to drink.
“We’re really happy with that,” Shindel said. “Of course, this performance can vary based on several factors. For instance, if you have a large sponge in a tiny volume of water, it will perform better than a tiny sponge in a huge lake.”
Recovery bypasses mining
From there, the team rinsed the sponge with mildly acidified water, which Shindel likened to “having the same acidity of lemonade.” The acidic solution caused the sponge to release the lead ions and be ready for another use. Although the sponge’s performance declined after the first use, it still recovered more than 90% of the ions during subsequent use cycles.
This ability to gather and then recover heavy metals is particularly valuable for removing rare, critical metals, such as cobalt, from water sources. A common ingredient in lithium-ion batteries, cobalt is energetically expensive to mine and accompanied by a laundry list of environmental and human costs.
If researchers could develop a sponge that selectively removes rare metals, including cobalt, from water, then those metals could be recycled into products like batteries.
“For renewable energy technologies, like batteries and fuel cells, there is a need for metal recovery,” Dravid said. “Otherwise, there is not enough cobalt in the world for the growing number of batteries. We must find ways to recover metals from very diluted solutions. Otherwise, it becomes poisonous and toxic, just sitting there in the water. We might as well make something valuable with it.”
Standardized scale
As part of the study, Dravid and his team set new design rules to help others develop tools to target particular metals, including cobalt. Specifically, they pinpointed which low-cost and nontoxic nanoparticles also have high surface areas and affinities for sticking to metal ions. They studied the performance of coatings of manganese, iron, aluminum, and zinc oxides on lead adsorption. Then, they established relationships between the structures of these nanoparticles and their adsorptive properties.
Called Nanomaterial Sponge Coatings for Heavy Metals (or “Nano-SCHeMe”), the environmental remediation platform can help other researchers differentiate which nanomaterials are best suited for particular applications.
“I’ve read a lot of literature that compares different coatings and adsorbents,” said Caroline Harms, an undergraduate student in Dravid’s lab and paper co-author. “There really is a lack of standardization in the field. By analyzing different types of nanoparticles, we developed a comparative scale that actually works for all of them. It could have a lot of implications in moving the field forward.”
Dravid and his team imagine that their sponge could be used in commercial water filters, for environmental clean-up or as an added step in water reclamation and treatment facilities.
“This work may be pertinent to water quality issues both locally and globally,” Shindel said. “We want to see this out in the world, where it can make a real impact.”
TRIZ: The use of Principle #24, Mediator, and Principle #31, Porous Materials, to filter harmful heavy-metal contaminants out of polluted water. The Ideal Final Result is a robust filtering system that removes harmful contaminants and remains in continuous use (Principle #20). Recapturing the contaminants and recycling them generates cash flow (Principle #22) and keeps them from being dumped into the environment.
- Details
- Category: Inside TRIZ
Unmanned Aerial Vehicle is Based on da Vinci Aerial Screw Design
Crimson Spin, a small, unmanned aerial vehicle (UAV), flies through the combined lift of four whirring red spiral-shaped blades. (Image: UMD Design Team)
In the fifteenth century, artist and engineer Leonardo da Vinci envisioned a craft that flew using a single helix-shaped propeller — the aerial screw — viewed by many as the first vertical take-off and landing (VTOL) machine ever designed.
In 2020, the Vertical Flight Society’s (VFS) 37th Annual Student Design Competition challenged students from across the world to reimagine da Vinci’s design. Using modern-day analytical and design tools, students were tasked to design and demonstrate a feasible modern-day VTOL vehicle based on the aerial screw concept and demonstrate the consistency of its physics.
Unveiled at the Vertical Flight Society's 2022 Transformative Vertical Flight Conference, Crimson Spin, a small unmanned aerial vehicle (UAV), flies through the combined lift of four whirring red spiral-shaped blades.
The craft was the culmination of more than two years’ worth of work stemming from UMD’s 2020 winning graduate entry. The prize-winning design, named Elico, derived its name from the Italian root for the words “helicopter,” “propeller,” “helix,” and “screw,” all rooted in Leonardo’s drawing of the aerial screw.
The design did in fact look functional — on paper and in computer simulations — but could it actually fly in reality? “We saw some really interesting behaviors in the lift mechanisms of the air screw in our computational fluid dynamics simulations and models, where we found an edge vortex that would form,” explained team member Ilya Semenov. “But with such a novel design, we couldn’t be 100 percent sure that the phenomenon was true, so creating a working model would help us validate if it was in fact happening.”
Developed as a technology demonstrator, Crimson Spin builds on the team's prize-winning design for a fully autonomous quadrotor vehicle, improving on da Vinci's concept by using tapered aerial-screw rotors to provide all lift, thrust, and control of the vehicle. A modular framework allows the design to adapt to changing mission requirements, with both hover and forward-flight capabilities. According to the team, the full-scale concept would allow riders to experience the genius of Leonardo da Vinci first-hand, safely and easily, by using an all-electric power plant, ultralight composite airframe, and pushbutton operation.
“That first successful flight was an incredible moment,” said team member Austin Prete. “It took three months, just trying to get it to fly correctly.”
While the research and findings are far too preliminary to extract potential applications at this point, making a working model based on the design has been a success in and of itself.
“Just the way the air screw worked was surprising,” added James Sutherland, Ph.D. candidate and team captain of the 2020 design team. “It’s possible that the aerial screw might be less noisy or create less downwash than a regular rotor with the same amount of thrust, but there is still a lot to learn and study before we know where it could be applied.”
Prete added that another interesting finding was that the screws can create the same amount of lift with fewer rotations compared to a traditional rotor, which may contribute to the reduced downwash, a not insignificant issue when flying traditional rotorcraft.
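Simple actuator-disk (momentum) theory offers one way to see why a larger effective rotor area can mean gentler downwash for the same thrust; the numbers below are purely illustrative and are not measurements from Crimson Spin:

```python
import math

# Actuator-disk (momentum theory) estimate of hover downwash velocity:
# v = sqrt(T / (2 * rho * A)). For equal thrust, a larger effective
# disk area means slower induced downwash. Numbers are illustrative only.
def induced_velocity(thrust_n: float, disk_area_m2: float, rho: float = 1.225) -> float:
    """Hover-induced (downwash) velocity for a rotor of given disk area."""
    return math.sqrt(thrust_n / (2.0 * rho * disk_area_m2))

thrust = 50.0  # newtons, a hypothetical small-UAV rotor loading
for area in (0.5, 1.0):  # m^2: smaller vs. larger effective disk area
    print(f"A = {area} m^2 -> downwash ~ {induced_velocity(thrust, area):.1f} m/s")
```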
Since the technology is so new, the team agrees that characterizing the rotors is important so future work can be done to evaluate the aerial screw against existing rotor styles. That will help them to understand in what flight regimes the screw design might perform best.
“For example, can you make an adjustable rotor screw?” said Prete. “You can make adjustments to a traditional rotor inflight, so what adjustments could you make to an aerial screw mid-flight to change its performance?”
- Details
- Category: Inside TRIZ
MRI FOR ALL
by ADRIAN CHO 23 FEB 2023
Hyperfine’s Swoop, the first U.S. FDA–approved low-field brain scanner, can be wheeled to a patient’s bedside. DEREK DUDEK/HYPERFINE
The patient, a man in his 70s with a shock of silver hair, lies in the neuro intensive care unit (neuro ICU) at Yale New Haven Hospital. Looking at him, you’d never know that a few days earlier a tumor was removed from his pituitary gland. The operation didn’t leave a mark because, as is standard, surgeons reached the tumor through his nose. He chats cheerfully with a pair of research associates who have come to check his progress with a new and potentially revolutionary device they are testing.
The cylindrical machine stands chest high and could be the brooding older brother of R2D2, the Star Wars robot. One of the researchers carefully guides the 630-kilogram self-propelled scanner up to the head of the bed, steering it with a joystick. Lifting the man by his bed sheet, the researchers help him ease his head into the Swoop—a portable magnetic resonance imaging (MRI) scanner made by a company called Hyperfine.
“Do you want ear plugs?” asks Vineetha Yadlapalli, the second researcher.
“Is it as loud as a regular MRI?”
“Not at all.”
“Then I guess I don’t need them.”
After propping up the patient’s legs to ease the strain on his back, Yadlapalli sets the machine to work, tapping in a few instructions from an iPad. The machine emits a low growl, then proceeds to beep and click. Within minutes, an image of the patient’s brain appears on Yadlapalli’s tablet.
For a half-hour, the man lies quietly, hands folded across his belly. He could be getting his hair set in an old-fashioned hair dryer. In a small way, he’s a pioneer helping take MRI where it’s never gone before.
In many cases, MRI sets the gold standard in medical imaging. The first useful MRI images emerged in the late 1970s. Within a decade, commercial scanners had spread through medicine, enabling physicians to image not just bone, but soft tissues. If doctors suspect you have had a stroke, developed a tumor, or torn cartilage in your knee, they’ll likely prescribe an MRI.
If you’re fortunate enough to be able to get one, that is. An MRI scanner employs a magnetic field to twirl atomic nuclei in living tissue—specifically the protons at the heart of hydrogen atoms—so that they emit radio waves. To generate the field, a standard scanner employs a large, powerful superconducting electromagnet that pushes a machine’s cost to $1.5 million or more, pricing MRI out of reach of 70% of the world’s population. Even in the United States, getting an MRI may require days of waiting and a midnight drive to some distant hospital. The patient must come to the scanner, not the other way around.
For years, some researchers have been striving to build scanners that use much smaller permanent magnets, made of the alloy often found in desk toys. They produce fields roughly 1/25th as strong as a standard MRI magnet, which once would have been far too weak to glean a usable image. But, thanks to better electronics, more efficient data collection, and new signal processing techniques, multiple groups have imaged the brain in such low fields—albeit with lower resolution than standard MRI. The result is scanners small enough to roll to a patient’s bed and possibly cheap enough to make MRI accessible across the globe.
The resolution of a brain scan from a low-field machine (first image) is coarser than conventional MRI (second image), but both images clearly reveal a hemorrhage. YALE SCHOOL OF MEDICINE
The machines mark a technological triumph. Kathryn Keenan, a biomedical engineer at the National Institute of Standards and Technology who is testing a Hyperfine scanner, says, “Everyone that comes through is super impressed that it even works.” Some say the scanners could also transform medical imaging. “We’re potentially opening up a whole new field,” says Kevin Sheth, a neurologist at the Yale School of Medicine who has worked extensively with the Swoop but has no financial interest in Hyperfine. “It’s not a question of ‘Is this going to happen?’ It’s going to be a thing.”
In August 2020, the Swoop became the first low-field scanner to win U.S. Food and Drug Administration (FDA) approval to image the brain, and physicians are putting it through clinical studies at Yale New Haven and elsewhere. Other devices are close behind. But Andrew McDowell, a physicist and founder of the consulting firm NeuvoMR, LLC, cautions it’s not clear there’s a market for a low-field scanner, with its lower resolution. “The real challenge is going to be convincing doctors to start using it,” he says. “That’s very difficult because for good reasons they’re very conservative.”
AN MRI SCANNER WORKS nothing like a camera; it is actually a radio that tunes in to protons in living tissue. Like a tiny compass needle, each proton is magnetic, and ordinarily the protons point randomly in all directions. However, an external magnetic field can align them. At that point, a pulse of radio waves of the right frequency and duration can tip them by 90°. The aligned protons then twirl like gyroscopes, emitting a radio signal of their own, whose frequency increases with the field’s strength.
That fleeting monotone radio hum reveals little. To create an image, the scanner must distinguish among waves coming from different points in the body. To do this, it sculpts the magnetic field, which makes protons at different locations sing at different frequencies and states of synchrony. The scanner must also distinguish one type of tissue from another, which it does by exploiting the fact that the radio signals fade at different rates in different tissues.
One reason the signal dies out is that the protons knock one another out of alignment through their own magnetic fields. The rate at which this happens differs between, say, fatty brain matter and watery cerebrospinal fluid. To measure the rate, the scanner applies pairs of pulses. The first pulse creates a signal that fades as the orientations of twirling protons fan out. The second reverses much of that evolution, eliciting an echo of the signal. The proton-proton interactions mute that echo, however. So the scanner can measure their rate by tracking how the echo shrinks as the delay between the two pulses increases.
While applying pair after pair of pulses, the scanner must simultaneously sort the echoes coming from different points in the brain. To do that, it relies on magnetic field gradients applied at key moments. For example, a gradient applied during the echo from chin to crown makes protons in different sideways slices through the head radiate at different frequencies. A gradient applied between pulses and across the head will set protons in vertical slices ahead or behind in their twirling, a “phase” difference that makes echoes from some slices reinforce one another and others cancel. By varying the gradient, the scanner can deduce the strength of the echo from each slice.
Over many repetitions, the scanner gathers a plethora of echoes in which intensity varies with delay, frequency, and phase. A standard mathematical algorithm decodes them to produce a map of how the proton-proton interactions vary throughout the brain, forming one type of MRI image. Other pulse sequences probe other tissue-specific processes—such as how fast protons diffuse, which can track fluid flow.
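The echo-shrinking measurement amounts to fitting an exponential decay. A minimal simulation, with an assumed T2 value, shows how the tissue constant can be recovered from two echo amplitudes:

```python
import math

# Echo amplitude vs. pulse-pair delay follows S(tau) ~ exp(-2*tau / T2),
# where T2 is the tissue-specific decay constant the scanner is after.
# The T2 value and delays below are assumed for illustration.
T2_true = 0.080  # seconds, a plausible order of magnitude for tissue

delays = [0.010, 0.020, 0.040]  # delay tau between the two pulses (echo at 2*tau)
echoes = [math.exp(-2 * tau / T2_true) for tau in delays]

# Recover T2 from any two echo measurements via the log-ratio of amplitudes.
tau1, tau2 = delays[0], delays[2]
s1, s2 = echoes[0], echoes[2]
T2_est = -2 * (tau2 - tau1) / math.log(s2 / s1)
print(f"Recovered T2 = {T2_est * 1000:.0f} ms")  # ~80 ms, matching the input
```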
All that pulsing explains why MRI scans take time and why an MRI machine chirps, clicks, and buzzes. Those sounds emerge as mechanical stresses rattle the current-carrying coils that create the magnetic gradients. A technician can tell what kind of a scan a machine is doing just from those sounds, Yadlapalli says.
A stronger field makes all this easier by polarizing the protons more thoroughly and creating a bigger signal. A standard scanner’s magnet produces a field of 1.5 tesla—30,000 times as strong as Earth’s field—and some reach 3 or 7 tesla. Even so, the protons pointing along a 1.5-tesla field outnumber those pointing the other way by just 0.001%. Reduce the field strength by a factor of 25 and the polarization falls with it. The signal-to-noise ratio plummets even more, by a factor of nearly 300.
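To put numbers on that, proton resonance frequency scales linearly with field strength, and SNR is often approximated as scaling with the field to the 7/4 power (a rule of thumb from the MRI literature, not a claim from this article):

```python
# Larmor frequency scales linearly with field strength; SNR is often
# approximated as scaling with B^(7/4). The 7/4 exponent is a common
# rule of thumb from the MRI literature.
GAMMA_MHZ_PER_T = 42.58  # proton gyromagnetic ratio / (2*pi), MHz per tesla

for b_tesla in (1.5, 0.064):  # standard scanner vs. Swoop-class low-field magnet
    print(f"B = {b_tesla} T -> protons resonate at {GAMMA_MHZ_PER_T * b_tesla:.1f} MHz")

reduction = 25.0  # field reduced by a factor of 25, per the article
print(f"Estimated SNR penalty: ~{reduction ** 1.75:.0f}x")  # ~279, 'nearly 300'
```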
In principle, a low-field scanner could coax a signal from the noise by taking data over a longer time period—just as radio astronomers sift a weak signal from noise by training their dishes on a star for hours or days. That tack won’t work with a human, who can hold still only so long. So, in developing low-field MRI, researchers had to find ways to extract data much faster.
One key element is better hardware, says Joshua Harper, a neural engineer at the German Paraguayan University. “We now have really fast, really cheap electronics,” he says. “That’s really why it works.” Even so, doing low-field MRI in a hospital room is tricky. Metal in other machines and even the walls can distort the field, and static from other devices can disrupt the radio signal. So, scanners employ countermeasures. For example, Hyperfine’s Swoop uses antennas to measure radio noise and cancel it, similar to how noise-canceling headphones block sound.
The new scanners also turn an aspect of the lower field to their advantage to run faster. To manipulate the protons, a high-field scanner must use higher frequency, higher energy radio waves, so it can pulse only so fast before it begins to heat the patient. Free of that speed limit, a low-field scanner can pulse faster and use more efficient pulse sequences, says Matthew Rosen of Massachusetts General Hospital, a physicist who co-founded Hyperfine. “We can interrogate very, very rapidly, doing things that you could never do at high field.”
Even so, gathering data fast enough for standard image reconstruction remains a challenge. One solution is to employ novel signal processing techniques, including artificial intelligence. Hyperfine engineers use a set of training images to teach a program called a neural network to construct brain images from relatively sparse data, says Khan Siddiqui, Hyperfine’s chief medical officer and chief strategy officer. “That’s where our secret sauce comes in.”
Compared with a standard scan, a low-field image looks blurrier. Still, physicists see its beauty. “It’s this incredible physics success story,” Rosen says. “It’s not just we pointy headed physicists [goofing] off and doing stuff that nobody cares about.” The technology vindicates those toiling in a forgotten corner of the field, McDowell says. “Who in their right mind would build a 65-millitesla machine when the glory is in building the 11-tesla one?”
HYPERFINE SAYS ITS SWOOP SCANNER is off to a pretty glorious start. It has sold more than 100 of the machines, mostly in the United States, at about $250,000 apiece. The goal is not to replace high-field scanners, but to expand how MRI is used, Siddiqui says. “Our portable scanner brings MRI closer to the patient, both in time and in distance.” Hyperfine envisions using it in the neuro ICU to quickly assess patients too ill or unstable to wheel to a conventional MRI or a CT machine, which produces a type of 3D x-ray.
A Swoop’s magnet consists of two disks and produces a field of 64 millitesla. A scan from it feels dramatically different from a standard scan. In a conventional scanner, an automated table glides you bodily into the cylindrical magnet. With the Swoop, an able patient can scooch into the magnet as if wriggling under a car’s bumper. A helmetlike headpiece containing the antennas cradles your head so snugly it may touch your nose, yet your arms and legs are free. The machine’s chirping is soft, even soothing.
In late 2019 and early 2020, as the coronavirus pandemic took hold, Sheth and colleagues tested the Swoop’s promise by scanning 50 ICU patients, including 20 with COVID-19. Because many were on ventilators and sedated, “we had no idea what their neurological status was and no way to take a look by any available imaging modality,” Sheth recalls. “And this provided us a way to do that at the bedside.” The scans revealed brain trauma in 37 cases, including eight COVID-19 patients, the researchers reported in January 2021 in JAMA Neurology.
A low-field MRI scanner images a patient in their bed in the intensive care unit at Yale New Haven Hospital. YALE SCHOOL OF MEDICINE
The cheaper, smaller machines might also allow patients to get more frequent follow-up scans. That’s a prospect that resonates with Ronald Walsworth, a physicist at the University of Maryland, College Park, and co-founder of Hyperfine. In 2007, his then–2-year-old son developed a noncancerous brain tumor. He was treated successfully, says Walsworth, who serves on Hyperfine’s advisory board. Still, he says, “There were signs that were not caught early and things that were not decided most efficiently because the MRIs only could happen once in a while.”
The Swoop’s advantages have won it fans. “Oh, my God, what a beautiful, beautiful piece of technology,” says Steven Schiff, a pediatric neurosurgeon at Yale University who has no financial interest in Hyperfine. Still, the Swoop can miss details a high-field scanner would catch, because its resolution of 1.5 millimeters is half as fine as a standard scanner’s. For example, Sheth’s team used it to image the brains of 50 patients who had had an ischemic stroke visible with standard MRI. The Swoop missed the five smallest, millimeter-size strokes, the researchers reported in April 2022 in Science Advances.
That finding shows physicians will have to exercise judgment in deciding when to use each type of scanner, Sheth says. “You shouldn’t be too worried, but you should understand the context in which you might miss something,” he says. Still, McDowell notes doctors may shy away from a low-field scanner if they think using it could leave them open to a malpractice suit.
IN MUCH OF THE WORLD, MRI is simply unavailable. A team in the Netherlands hopes its scanner will change that. Its magnet differs dramatically from the Swoop’s: it consists of 4098 cubes of neodymium iron boron—an alloy developed in the 1980s by carmakers—embedded in a hollow plastic cylinder and arranged in a configuration called a Halbach array to produce a uniform horizontal field. “Our system is intrinsically better and has fewer distortions,” asserts Andrew Webb, an MRI physicist at Leiden University Medical Centre, so it requires less help from processing such as machine learning.
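A minimal sketch of the Halbach idea follows—an idealized single ring, not the Leiden team’s actual 4098-cube layout; the ring radius and magnet count are invented for illustration. In an idealized dipole Halbach cylinder, the magnetization of the element at azimuth φ points along angle 2φ, which superposes into a nearly uniform transverse field inside the bore and little field outside:

```python
import numpy as np

# Illustrative Halbach "dipole ring" (toy geometry, not the Leiden design).
# Rule: a magnet sitting at azimuthal angle phi has its magnetization
# rotated to 2*phi; the ring then produces a nearly uniform transverse
# field inside the bore and cancels its own field outside.

n_magnets = 24                       # one ring of cube magnets (toy number)
phi = np.linspace(0, 2 * np.pi, n_magnets, endpoint=False)

r = 0.25                             # assumed bore-scale ring radius, meters
positions = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)
magnetization = np.stack([np.cos(2 * phi), np.sin(2 * phi)], axis=1)

for p, m in zip(positions[:4], magnetization[:4]):
    print(f"cube at ({p[0]:+.3f}, {p[1]:+.3f}) m -> "
          f"moment direction ({m[0]:+.2f}, {m[1]:+.2f})")
```

The design’s appeal for low-cost MRI is that the field comes entirely from permanent magnets: no superconductor, no cryogens, and no electrical power to maintain the main field.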
A private company, Multiwave Technologies in Switzerland, is trying to bring the scanner to market. It will apply for FDA approval this year and aims to rent its machines in a subscription model, says Tryfon Antonakakis, Multiwave’s co-CEO. “Our goal is to make it as affordable as possible and not necessarily to be in the hospital,” says Antonakakis, an engineer and applied mathematician. “We are looking to go into the mountains, into the medical deserts in developing countries.”
Webb and his colleagues, including Martin van Gijzen, an applied mathematician at the Delft University of Technology, have another plan for spreading their technology: giving it away. “We made the decision—Martin, myself, all our team—that we were not going to patent things,” Webb says. “Everything is going to be open source,” so that anyone can download their design from the internet and build scanners. Webb and colleagues hope entrepreneurs in developing nations will manufacture them locally.
To seed the idea, they shipped a scanner, packaged as a kit, to Johnes Obungoloch, a biomedical engineer at Mbarara University of Science and Technology in Uganda, who was a graduate student at Pennsylvania State University, University Park, when Webb and Schiff were also there. In September 2022, Webb and others flew to Uganda to help Obungoloch and his team assemble the scanner in 11 days.
Soon it will be put to use in a project to test the utility of low-field MRI in the developing world. The CURE Children’s Hospital of Uganda, a 55-bed pediatric neurosurgical facility in Mbale run by an international nonprofit, plans to compare Obungoloch’s scanner, a Swoop, and a CT scanner. Doctors will image children with hydrocephalus, in which cerebrospinal fluid collects in the brain and compresses it, potentially causing debilitating or fatal damage. Globally, hydrocephalus afflicts 400,000 children every year, and it accounts for 75% of the CURE hospital’s patients. In Africa, an infection is the usual cause.

For years, Schiff and colleagues at the hospital have used CT scans to guide an innovative surgery that allows the fluid to drain into the brain’s ventricles—an alternative to installing a shunt to the abdomen. However, a CT scan exposes children to considerable x-ray radiation, so CURE doctors will see whether low-field MRI images can guide surgeons. “If the MRI proves comparable to the CT scan, then there is no reason why we should be using the CT scan anymore,” says Ronald Mulondo, a physician at CURE who directs the project.
The study is awaiting final governmental approval. If it’s successful, Obungoloch envisions building more scanners, perhaps for Africa’s six other CURE hospitals, and even sourcing some of the parts locally. Uganda has public health care, so that vision depends on government funding, he says.
Still, like their peers elsewhere, doctors in Uganda may have reservations about the technique’s limited resolution, Obungoloch notes. “The radiologists see it and say, ‘Well, this is a crappy image, and we don’t care how long it took you to acquire it.’” Government officials may also think Ugandans shouldn’t have to settle for lower resolution imaging, no matter how useful, he says.
In truth, developers of low-field MRI are pushing for nothing less than a rethink of medical imaging. “Is the best technology the scanner that can provide the highest quality images or is it the scanner that can lead to the most improved patient outcomes?” asks Harper, who collaborated on Webb’s open-source rig and hopes to acquire a Swoop.
What will win over doctors, Sheth says, will be a “use case”—a killer app for the scanners. For example, they might be put into special ambulances for stroke care. He questions whether Hyperfine and others have found that use case but predicts it will come.
Then there are patients to win over. After his time in the Hyperfine scanner, the pituitary tumor patient confides to Yadlapalli that it wasn’t quite as comfortable as a regular MRI. Noting that he still can’t breathe through his nose because of the surgery, he says the snug-fitting head basket bothered him. “I’d rather be scooted over to a real MRI.” Call him a reluctant pioneer.
ABOUT THE AUTHOR
Adrian Cho, Staff Writer
TRIZ Notes: This new MRI system applies “do it in reverse” thinking—going to lower fields rather than ever-higher ones—and has been reduced in size and weight and made more convenient to use. The system is more IDEAL and will continue to evolve closer to the IFR. It just needs a few more “S” curves.
Innovation for Urgency: Understanding the Different Types of Innovations
by admin | Mar 10, 2023 | Innovation
Introduction
Innovation is a key driver of progress and success in any industry, but not all innovations are created equal. Innovation can be classified in terms of impact or urgency. Impact-driven innovation is motivated primarily by financial profit; its “impact” is not necessarily a better world. Urgency-driven innovation is driven by the need to address problems that press on human quality of life; here, impact is judged in terms of quality of life broadly, including positive effects on flora and fauna. Both types of innovation are essential, in different industries and situations. Some innovations pursue progress and superior performance, others are designed to stop or control negative effects, and still others go beyond control and aim to eliminate problems altogether. In this post, we will explore these different types of innovations and their impact on industries.
Classification based on urgency for quality of life
One type of innovation is the “vitamin” type. It focuses on progress and superior performance: creating products or services that improve an existing process or provide a better experience for the user. This type of innovation is essential for industries that want to stay competitive and continue to grow. Examples of vitamin-type innovations include smartphones, high-performance vehicles, and advanced medical equipment. Some more examples in this category:
• The introduction of smartphones with new features, such as higher resolution cameras and more powerful processors, improves the performance of previous models.
• The development of new materials, such as graphene, that offer improved mechanical and electrical properties compared to traditional materials.
• The creation of more efficient solar cells that generate more electricity per unit area.
• Apple’s AirPods Pro with noise-canceling technology enhance the user experience.
• Tesla’s electric cars offer superior performance compared to traditional combustion engine cars. (Yes! It is only a vitamin-type innovation—see the separate discussion of why.)
• Google’s search algorithm updates improve the accuracy and relevance of search results.
Another type of innovation is the “analgesic” type. It focuses on stopping negative effects or keeping them under control: creating products or services that address a problem or limitation of an existing product. This type of innovation is crucial for industries that want to prevent negative effects from harming their business. Examples of analgesic-type innovations include cybersecurity software, pollution control systems, and safety equipment.
Further examples in this category include:
• The development of vaccines and treatments for diseases, such as the COVID-19 vaccines, that help to mitigate the negative effects of the illness.
• The introduction of filters in cigarettes to reduce the amount of harmful chemicals that are released when smoking.
• The use of pollution control devices in factories to reduce the amount of harmful emissions released into the environment.
• Robots that collect waste from lakes and rivers, reducing harm to the environment.
• Lift-based car parking that can store 20 cars in the ground area occupied by three.
• Water filters that remove harmful contaminants from drinking water, keeping people healthy and safe.
• Noise-cancelling headphones that reduce stress and fatigue caused by loud noises.
• Apps that limit screen time to help prevent digital addiction and associated negative effects.
The “antibiotic” type of innovation takes things to the next level. It aims to eliminate problems or negative effects altogether: creating products or services that fundamentally change the way things are done, often creating entirely new industries. This type of innovation is essential for industries that want to stay ahead of the curve and address major problems in a game-changing way. Examples of antibiotic-type innovations include renewable energy sources, autonomous vehicles, and artificial intelligence. Further examples:
• The development of antibiotics to treat bacterial infections, which have helped to reduce mortality rates from infectious diseases.
• The introduction of water treatment technologies, such as chlorination and filtration, that have helped to eliminate waterborne diseases in developed countries.
• The creation of renewable energy sources, such as wind and solar power, that help to eliminate reliance on fossil fuels and reduce carbon emissions.
• Recycling technology that can convert waste materials into usable products is an “antibiotic” type innovation in the waste management industry. It aims to eliminate the negative effects of waste, including environmental pollution and resource depletion.
• Tesla Powerwall – aims to eliminate household energy-supply problems by providing efficient, sustainable energy storage for homes and businesses.
• Zero-waste manufacturing – aims to eliminate waste problems by designing manufacturing processes that avoid waste, increase efficiency, and reduce environmental impact.
• Automated inventory management systems – aim to eliminate inventory problems by using sensors, machine learning, and analytics to optimize stock levels, reduce waste, and improve supply chain efficiency.
• Carbon capture technology – aims to eliminate carbon emissions by capturing and storing carbon dioxide from industrial processes before it reaches the atmosphere.
All three categories of innovations have the potential to be of interest to investors, depending on the specific context and industry.
Vitamin-type innovations, which focus on progress and superior performance, may be particularly appealing to investors looking for growth opportunities and high returns on investment.
Analgesic-type innovations, which address negative effects or problems that already exist, may also be attractive to investors if they offer cost savings or other benefits to customers.
Antibiotic-type innovations, which aim to eliminate problems or negative effects, may be less immediately attractive to investors because they often require significant investment and may take longer to generate returns. However, these types of innovations may be more valuable in the long term as they can create new markets or disrupt existing ones and may also have more significant social and environmental impacts.
Conclusions
Understanding the different types of innovations and their potential impact can help individuals and organizations prioritize their efforts and investments. While all types of innovations have value, it is important to recognize that analgesic and antibiotic innovations have a significant impact on society and can offer attractive investment opportunities. These types of innovations may not receive the same level of attention as vitamin-type innovations because of much higher research and development risks, but they can provide more stable and predictable returns over the long term. Hackathons, for example, usually produce vitamin-type innovations, which are not necessarily the most urgent ones for humankind. We must address this gap and find new models to support ideation and development around analgesic- and antibiotic-type innovations.

It is also worth considering how these different types of innovations are addressed in different industries and sectors, and weighing the potential trade-offs and unintended consequences of each. An impact-driven innovation may generate significant financial returns yet have negative societal or environmental effects. Similarly, an analgesic- or even antibiotic-type innovation may solve a particular problem while creating new problems or unintended consequences. Any innovation therefore has to be analyzed holistically and from multiple angles: over its life cycle, across supply and value chains, and so on.

Innovation is a complex and multifaceted process, and there is no one-size-fits-all approach to the challenges and opportunities it presents. By understanding the different types of innovations and their potential impact, individuals and organizations can make more informed decisions about where to focus their efforts and investments, and how to balance short-term and long-term goals.
—————
Credits: Stelian Brad
Keeping Drivers Safe with a Road That Can Melt Snow, Ice on Its Own
The researchers prepared a sodium acetate salt and combined it with a surfactant, silicon dioxide, sodium bicarbonate and blast furnace slag.
American Chemical Society Feb 17, 2023
Adapted from ACS Omega 2023, DOI: 10.1021/acsomega.2c07212
Slipping and sliding on snowy or icy roads is dangerous. Salt and sand help melt ice or provide traction, but excessive use is bad for the environment. And sometimes a surprise storm can blow through before these materials can be applied. Now, researchers reporting in ACS Omega have filled microcapsules with a chloride-free salt mixture that is added to asphalt before roads are paved, providing long-term snow-melting capability in a real-world test.
Driving on snowy roads at or near freezing temperatures can create unsafe conditions, with nearly invisible, slick black ice forming if roads aren’t cleared quickly enough. But the most common ways to keep roads clear have significant downsides:
- Regular plowing requires costly equipment, is labor intensive and can damage pavement.
- Heavy salt or sand applications can harm the environment.
- Heated pavement technologies are prohibitively expensive to use on long roadways.
Recently, researchers have incorporated salt-storage systems into “anti-icing asphalt” to remove snow and prevent black ice from forming. However, these asphalt pavements use corrosive chloride-based salts and only release snow-melting substances for a few years. So, Yarong Peng, Quansheng Zhao, Xiaomeng Chu and colleagues wanted to develop a longer-term, chloride-free additive to effectively melt and remove snow cover on winter roads.
The researchers prepared a sodium acetate salt and combined it with a surfactant, silicon dioxide, sodium bicarbonate and blast furnace slag — a waste product from ironmaking — to produce a fine powder. They then coated the particles in the powder with a polymer solution, forming tiny microcapsules. Next, the team replaced some of the mineral filler in an asphalt mixture with the microcapsules.
In initial experiments, a pavement block made with the new additive lowered the freezing point of water to -6 °F (about -21 °C). And the researchers estimated that a 5-cm-thick layer of the anti-icing asphalt would remain effective at melting snow for seven to eight years. A real-world pilot test on a highway off-ramp showed that the anti-icing asphalt melted snow that fell on the road, whereas traditional pavement required additional removal operations. Because the additive uses waste products and could release salt for most of a road’s lifetime, the researchers say it is a practical and economical solution for wintertime snow and ice removal.
TRIZ: Slippery, icy roads are dangerous to drive on. The Ideal Final Result concept tells us that the system should fix itself using resources that are readily available and free or low cost. The main component of the road system is the road—more precisely, the blacktop—so the road surface itself should be able to melt and remove snow and ice. By merging (Principle #5) a low-cost additive into the blacktop, we can create an environment where the road itself melts the snow and ice.