Trashfence Catches a Garbage Tsunami Heading Downriver (IEN)

Ben Munson, Unit 202 Productions; Eric Sorensen · Jul 28, 2022


Trash is everywhere. It’s on land, where it sometimes piles so high it becomes a smoldering mountain. It’s in space, where debris hurtling at blistering speeds is battered and breaks down differently. And it’s in our water, where it sometimes forms huge garbage waves.

The Ocean Cleanup, a nonprofit environmental engineering organization, has identified one river where the trash waves seem to be continually breaking. The group said the Rio Motagua basin in Guatemala is particularly messy, sending an estimated 20,000 tons of plastic into the Caribbean Sea annually. That means this one river is responsible for 2% of the plastic that enters the oceans worldwide.

To push back against the rush of rubbish, Ocean Cleanup has developed the eight-meter-tall Interceptor Trashfence. It looks like a giant metal volleyball net, and it’s designed to contain trash upstream before it can hit the ocean and disperse. It’s an upgraded version of the Interceptor Original, whose 10,000-kilogram capacity wouldn’t hold up long against the annual Motagua floods.

The Interceptor Trashfence borrows technology that is standard in avalanche and landslide protection systems. With the reinforced model anchored across the riverbed, the idea is to catch the trash, wait for water levels to recede, and then move the garbage pile with excavators.

In a video showing the Interceptor Trashfence in action earlier this year, a shocking amount of garbage rages down the Rio Motagua before the fence traps it. Unfortunately, the Trashfence eventually springs a leak as the river’s force erodes the riverbed below the fence. Fortunately, Ocean Cleanup engineers don’t seem discouraged.

For now, the organization is optimizing the design and figuring out just how many Trashfences it will take to keep all or most of the plastic from making it to the sea.

TRIZ Application: To clean up the ever-growing problem of plastic trash pollution in our oceans, the first step is to stop the influx of plastic trash. Using TRIZ Principle #10, Prior Action, we can perform this task by blocking the trash on rivers with an "Interceptor Trashfence." It is easier to recover this trash before it is dispersed into the ocean.

 

July 21, 2022 · Industry Week

With bans as early as 2025, BASF and MIT are searching for a biodegradable substitute.

MIT News



Microplastics, tiny particles of plastic now found worldwide in the air, water, and soil, are increasingly recognized as a severe pollution threat and are now found in the bloodstreams of animals and people. Some microplastics are intentionally added to various products, including agricultural chemicals, paints, cosmetics, and detergents. According to the European Chemicals Agency, these intentionally added microplastics amount to an estimated 50,000 tons annually in the European Union alone. The EU has already declared that these added, nonbiodegradable microplastics must be eliminated by 2025. So the search is on for suitable replacements, which do not currently exist.

Now, scientists at the chemical company BASF and the Massachusetts Institute of Technology have developed a silk-based system that could provide an inexpensive and easily manufactured substitute. The new process is described in a paper in the journal Small.

The microplastics widely used in industrial products generally protect a specific active ingredient (or ingredients) from being degraded by exposure to air or moisture until it is needed. They provide a slow release of the active ingredient for a targeted period and minimize adverse effects on the surroundings. For example, vitamins are often delivered as microcapsules packed into a pill or capsule, and pesticides and herbicides are similarly enveloped. But the materials used today for such microencapsulation are plastics that persist in the environment for a long time. Until now, no practical, economical substitute that biodegrades naturally has been available.

Much of the burden of environmental microplastics comes from other sources, such as the degradation over time of larger plastic objects like bottles and packaging and the wear of car tires. Each of these sources may require its own kind of solution, says Benedetto Marelli, a co-author of the study and an MIT professor of civil and environmental engineering. The European Chemicals Agency has estimated that intentionally added microplastics represent approximately 10-15% of the total amount in the environment. Still, this source may be relatively easy to address using this nature-based biodegradable replacement.

"We cannot solve the whole microplastics problem with one solution that fits them all," Marelli says. "Ten percent of a big number is still a big number, and we'll solve climate change and pollution of the world one percent at a time."

Unlike the high-quality silk threads used for delicate fabrics, the silk protein used in the new alternative material is widely available and less expensive, Liu says. While silkworm cocoons must be painstakingly unwound to produce the fine threads needed for fabric, for this use, non-textile-quality cocoons can be used, and the silk fibers can be dissolved using a scalable water-based process. The processing is so simple and tunable that the resulting material can be adapted to work on existing manufacturing equipment, potentially providing a simple "drop-in" solution using existing factories.

Silk is considered safe for food or medical use, as it is non-toxic and degrades naturally in the body. In lab tests, the researchers demonstrated that the silk-based coating material could be used in existing, standard spray-based manufacturing equipment to make a water-soluble micro-capsule herbicide product. This method was then tested in a greenhouse on a corn crop. The test showed it worked even better than an existing commercial product, inflicting less damage to the plants, says Muchun Liu, an MIT postdoc who is the study's lead author.

While other groups have proposed biodegradable materials that may work at a small laboratory scale, Marelli says, there is a "strong need" for such a material to work well commercially without sacrificing performance.  

Liu explains that the secret to making the material compatible with existing equipment is in the silk material's tunability. By precisely adjusting the polymer chain arrangements of silk materials and the addition of a surfactant, it is possible to fine-tune the properties of the resulting coatings once they dry out and harden. The material can be hydrophobic (water-repelling) even though it is made and processed in a water solution. Or it can be hydrophilic (water-attracting) or anywhere in between, and for a given application, it can be made to match the characteristics of the material it is being used to replace.

The new method can use low-grade silk that is unusable for fabrics and large quantities of which are currently discarded because they have no significant uses, Liu says. It can also use discarded silk fabric, diverting that material from being disposed of in landfills.

Currently, 90% of the world's silk production takes place in China, Marelli says, but that's mainly because China has perfected the production of the high-quality silk threads needed for fabrics. But because this process uses bulk silk and does not need that quality, production could quickly be ramped up in other parts of the world to meet local demand if this process becomes widely used.

This process "represents a potentially highly significant advance in active ingredient delivery for a range of industries, particularly agriculture," says Jason White, director of the Connecticut Agricultural Experiment Station. He was not associated with the research. "Given the current and future challenges related to food insecurity, agricultural production, and a changing climate, novel strategies such as this are greatly needed."

The research team also included Pierre-Eric Millard, Ophelie Zeyons, Henning Urch, Douglas Findley, and Rupert Konradi from the BASF corporation in Germany. The U.S. BASF supported the work through the Northeast Research Alliance (NORA).

This article was originally published in MIT News. 

TRIZ thoughts: Ideality – Low-grade silk cocoons are cheap, readily available, and a biodegradable resource that can replace non-biodegradable microplastics.

Rabbit 'hologram' created by levitating screen using sound waves (New Scientist)

Sound waves can be used to keep an object hovering in the air, and a new technique works even in crowded spaces

17 June 2022

By Karmela Padavic-Callaghan


A rabbit hologram levitated above a 3D-printed rabbit

Ryuji Hirayama, University College London

Ultrasonic sound waves have been used to levitate objects in crowded rooms to make hologram-like displays. Such acoustic levitation was previously practical only in empty spaces, but a new algorithm can quickly readjust the sound waves when they encounter an obstacle to keep the object in the air.

Sound waves consist of air particles moving together. If manipulated correctly, they can pick up and move objects. However, if the sound waves run into another object that reflects or scatters them, the levitating object can tumble down.

Ryuji Hirayama at University College London and his colleagues had previously used sound to levitate glowing beads to create floating 3D shapes. Now, they have developed a computational technique that enables them to levitate and manipulate objects above bumpy surfaces and near obstacles.

Hirayama and his colleagues used 256 small loudspeakers arranged in a grid to levitate objects with precisely shaped ultrasound waves. When these sound waves encountered objects that would usually scatter them, like a wall or a houseplant, a computer algorithm quickly adjusted their shape to maintain levitation.
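The article does not give the focusing math, but the core idea behind such a speaker grid can be sketched in a few lines: each transducer emits with a phase that cancels its propagation delay to a chosen focal point, so all 256 waves arrive in phase there. The 40 kHz frequency and 1 cm spacing below are assumptions for illustration, not figures from the study.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
FREQUENCY = 40_000.0     # Hz; 40 kHz transducers are common, but this is an assumption
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY  # ~8.6 mm

def focus_phases(speakers, target):
    """Return an emission phase for each speaker so all waves arrive in
    phase at `target`, creating a high-pressure focus that can trap a bead.
    Emitting with phase -k*d (k = 2*pi/wavelength, d = distance to the
    focal point) cancels each speaker's propagation delay."""
    k = 2 * math.pi / WAVELENGTH
    return [(-k * math.dist(pos, target)) % (2 * math.pi) for pos in speakers]

# A 16x16 grid (256 speakers, as in the article) spaced 1 cm apart,
# focusing 10 cm above the grid's center.
grid = [(0.01 * i, 0.01 * j, 0.0) for i in range(16) for j in range(16)]
phases = focus_phases(grid, (0.075, 0.075, 0.10))
```

The researchers' contribution is recomputing such phases fast enough to compensate for scattering obstacles; this sketch only shows the free-field case.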

The researchers demonstrated their technique by 3D printing a small plastic rabbit, then levitating objects near it. In one experiment, they made illuminated beads fly around the rabbit in the shape of a butterfly whose "wings" could be controlled by the motion of a researcher's fingers.

In another, they levitated a piece of nearly transparent fabric above the rabbit and made it spin while a projector cast images of the rabbit onto it. The result was a seemingly 3D rabbit hologram hovering above its plastic counterpart.

They also levitated a drop of paint over a glass of water. This experiment showed that their algorithm works even when suspending objects that can change shape above a surface that can wiggle as it reflects sound.

Bruce Drinkwater at the University of Bristol in the UK says that the new technique could project information with lots of "wow factor" in museum displays or advertising. It could also be employed in chemical engineering, using sound waves to mix materials without anyone having to touch them. He says that the new method seems more robust than previous ones, so it could make acoustic levitation practical more broadly.

Hirayama says that, so far, he and his colleagues have only considered acoustic levitation in spaces full of sound-scattering objects that don't move at all or move only in a few predictable ways, such as a hand trying to touch a levitating hologram. Their next goal is to perfect their mid-air object manipulation using sound when everything in the room moves in unexpected and unanticipated ways.

"We want to make this technology practical and have it react to objects in real-time," he says.

Journal reference: Science Advances, DOI: 10.1126/sciadv.abn7614

Read more: https://www.newscientist.com/article/2324931-3d-rabbit-hologram-created-by-levitating-screen-using-sound-waves/#ixzz7WZT5VM9x


Chip startups using light instead of wires gaining speed and investments (Reuters)

By Jane Lanhee Lee                                                                                               29 April 2022


April 26 (Reuters) - Computers using light rather than electric currents for processing, only years ago seen as research projects, are gaining traction, and startups that have solved the engineering challenge of using photons in chips are getting big funding.

In the latest example, Ayar Labs, a startup developing this technology called silicon photonics, said it had raised $130 million from investors, including chip giant Nvidia Corp (NVDA.O).

While transistor-based silicon chips have increased computing power exponentially over recent decades, transistors have now shrunk to the width of several atoms, and shrinking them further is challenging. Not only is it hard to make something so minuscule, but as they get smaller, signals can bleed between them.

So Moore’s law, which said that the density of transistors on a chip would double every two years, bringing down costs, is slowing, pushing the industry to seek new solutions to handle increasingly heavy artificial intelligence computing needs.
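Moore's law as stated above is just an exponential with a two-year doubling period; a one-line model makes the scaling concrete (the numbers are illustrative, not any particular process node):

```python
def transistor_density(years_elapsed, initial_density):
    """Moore's law as stated in the article: density doubles every two years."""
    return initial_density * 2 ** (years_elapsed / 2)

# After one doubling period (2 years), density has doubled;
# after a decade it is up 32x -- the pace the industry is now struggling to keep.
```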

According to data firm PitchBook, silicon photonics startups raised over $750 million last year, doubling from 2020. In 2016 that was about $18 million.

“A.I. is growing like crazy and taking over large parts of data centers,” Ayar Labs CEO Charles Wuischpard told Reuters. “The data movement challenge and energy consumption in that data movement is a big issue.”

The challenge is that many large machine-learning algorithms can use hundreds or thousands of chips for computing. Using current electrical methods, there is a bottleneck in data transmission speed between chips or servers.

Light has been used to transmit data through fiber-optic cables, including undersea cables, for decades, but bringing it to the chip level was challenging as devices used for creating light or controlling it have not been as easy to shrink as transistors.

PitchBook’s senior emerging technology analyst Brendan Burke expects silicon photonics to become standard hardware in data centers by 2025 and estimates the market will reach $3 billion by then, similar to the market size of the A.I. graphic chips market in 2020.


A view of a PsiQuantum wafer, a silicon wafer containing thousands of quantum devices, including single-photon detectors, manufactured via PsiQuantum’s partnership with GlobalFoundries in Palo Alto, California, U.S., in an undated photo taken in March 2021. PsiQuantum/Handout via REUTERS

Beyond connecting transistor chips, startups using silicon photonics for building quantum computers, supercomputers, and chips for self-driving vehicles are also raising significant funds.

PsiQuantum has raised about $665 million so far, although the promise of quantum computers changing the world is still years out.

Lightmatter, which builds processors using light to speed up A.I. workloads in the data center, raised $113 million and will release its chips later this year and test with customers soon after.

Luminous Computing, a startup backed by Bill Gates that is building an A.I. supercomputer using silicon photonics, raised $115 million.

It is not just the startups pushing this technology forward. Semiconductor manufacturers are also gearing up to use their silicon chip-making technology for photonics.

GlobalFoundries Head of Computing and Wired Infrastructure Amir Faintuch said collaboration with PsiQuantum, Ayar, and Lightmatter has helped build up a silicon photonics manufacturing platform for others to use. The platform was launched in March.

Peter Barrett, founder of venture capital firm Playground Global, an investor in Ayar Labs and PsiQuantum, believes in the long-term prospects for silicon photonics for speeding up computing but says it is a long road ahead.

“What the Ayar Labs guys do so well ... is they solved the data interconnect problem for traditional high-performance (computing),” he said. “But it’s going to be a while before we have pure digital photonic compute for non-quantum systems.”

To read the complete article, go to: https://www.reuters.com/technology/chip-startups-using-light-instead-wires-gaining-speed-investments-2022-04-26/

In TRIZ terms, this is a new “S-curve” in the evolution of computer chip development: the replacement of transistor-based silicon chips with silicon photonics. The challenge of making silicon chips ever smaller has given way to a new technology using light instead of electricity. This evolution will allow for increased speed and smaller size in new systems.

Universal Science

Scientists developed transparent solar cells that can be used in windows and last for 30 years

September 21, 2021

Generating electricity outside cities and transporting it into the city comes with significant power loss; ideally, as much of it as possible should be generated locally. Scientists have found a way to alleviate this issue by developing a new type of transparent solar cell that can be used in the windows of buildings and is expected to last for 30 years.


Tall buildings have a lot of solar energy potential with loads of glass surface - Song_about_summer via Shutterstock / HDR tune by Universal-Sci

The need for eco-friendly energy production is increasing at a rapid pace. According to research, solar is one of the cheapest methods of generating electricity. A problem with solar power stations is that they take up a lot of space, and while cities generally need the most electricity, they have the least space to generate it.

While silicon is still the most efficient material for solar panels, it is not transparent. Researchers at the University of Michigan, North Carolina State University, Tianjin University, and Zhejiang University have looked into organic, carbon-based materials for window-friendly solar panels. They published their work in the journal Nature Communications.

The main challenge was to keep highly efficient organic light-converting materials from rapidly deteriorating in use. The fragility of these materials traces to the molecules that move photogenerated electrons to the electrodes. These materials are called "non-fullerene acceptors" to distinguish them from the sturdier but less efficient "fullerene acceptors" created from nanoscale carbon mesh. Solar cells made with non-fullerene acceptors that incorporate sulfur can reach efficiencies of 18% (an efficiency level that rivals silicon). Still, the problem is that they have a very short lifespan.

Yongxi Li, the first author of the study, explains that non-fullerene acceptors deliver very high efficiency but incorporate weak bonds that easily break under high-energy photons, especially the ultraviolet photons found in sunlight.

After examining the nature of the non-fullerene acceptor deterioration, the researchers discovered that the exposed solar cells only needed reinforcements in a few spots. Firstly, ultraviolet rays would have to be blocked off. To do so, the team added a coating of zinc oxide, a popular sunscreen component on the side of the glass that faces the sun. 

A thinner zinc oxide layer next to the light-absorbing area improves the conduction of solar-generated electrons to the electrode. Sadly, it also tears down the fragile light absorber. The researchers added a layer of a carbon-based material called IC-SAM as a buffer to solve this issue.

On top of that, the electrode that draws positively charged "holes" into the circuit (essentially spaces abandoned by electrons) can react with the light absorber. So to protect that flank, the team added another buffer layer in the form of a fullerene shaped like a soccer ball. (A fullerene is an allotrope of carbon whose molecule consists of carbon atoms joined by single and double bonds to form a closed mesh, with fused rings of five to seven atoms.)

The researchers then put their new reinforcements to the test under various intensities of simulated sunshine, ranging from 1 sun to 27 suns, and temperatures as high as 65 degrees Celsius. Based on how performance declined under these conditions, they projected that the solar cells would still be functioning at 80% efficiency after 30 years.

 One of the researchers holding a solar cell module with 40% transparency, based on the new design with an estimated life expectancy of 30 years - (Image Credit: Robert Coelius, University of Michigan Engineering)
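The article does not detail how the accelerated test was extrapolated to 30 years. A common simplification, shown here as a rough sketch only, assumes degradation depends on cumulative light dose, so hours at many "suns" convert to equivalent outdoor years; the 5-sun-hour day below is an assumed average, and real aging also depends on temperature and humidity.

```python
def equivalent_years(test_hours, suns, peak_sun_hours_per_day=5.0):
    """Translate an accelerated light-soak test into equivalent outdoor years,
    assuming degradation depends only on cumulative light dose (a deliberate
    simplification of real accelerated-aging analysis)."""
    dose_sun_hours = test_hours * suns  # e.g. 27-sun testing accrues dose 27x faster
    return dose_sun_hours / (peak_sun_hours_per_day * 365)

# e.g. 2,000 hours at 27 suns ~ 29.6 years of 5-sun-hour days
print(round(equivalent_years(2000, 27), 1))
```

This is why a 27-sun test lasting a few months can speak to a multi-decade outdoor lifetime.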



The image shows an extreme close-up of a cross-sectional slice of an OPV with the added layers of material (IC-SAM and C70) between the organic material and the external buffers. After the material was subjected to high-intensity light to replicate an estimated age of 30 years, it revealed an intact organic active region with no breakdown at the edges. (Image Credit: Kan Ding, University of Michigan)

Because the materials can be prepared as liquids, the manufacturing costs are expected to be relatively low, increasing the feasibility for wide-scale use. All in all, we are yet another step closer to actual solar power-generating windows with these findings. 

Idealized Supply Chain Design: Towards Ideality

Published on October 15, 2021


 Igniting Food Systems Revolution | Industry 4.0 Partner @ Cognizant

Five years ago, I came across the OODA loop model (observe-orient-decide-act), developed in the 1960s by military strategist Air Force Colonel John Boyd to support fighter pilots’ decision-making on the battlefield. You might draw parallels with some modern frameworks, but that is not the objective here. What impressed me was his serious approach to assessing the implications of the second law of thermodynamics and its impact on military strategy.


Boyd’s definition of the law was much more “user friendly” than the versions we usually find in the works of Kelvin, Planck, and others. He stated that:

 “… entropy of any closed system always tends to increase and thus the nature of any given system continuously changes even as efforts are directed toward maintaining it in its original form.”


That was an exciting insight that stimulated my further thinking in the supply chain domain. Despite all the effort we put into controlling the flows, the environment continuously presents us with ever more challenging conditions. We try to impose even more control and organization over it. But how successfully?

I did not massage this mental model for long, as my attention slipped to another point that this law highlights. Any organized system continuously tends toward increasing entropy; at the same time, it always aims to minimize its energy state. Well, the more we want to organize and control the system, the more energy we will spend on it, yes?

While incubating those cosmic thoughts, I felt like Don Quixote fighting windmills in real life. The supply chain challenges I had to orchestrate with my global team felt like fundamental contradictions rather than the trade-offs we have historically managed across many industries. I had to build operating models that were both agile and lean, delivering superior service to the markets while meeting aggressive cost-cutting goals, activating efficiencies, and increasing the level of customized solutions. Contradictions.


How can we orchestrate supply chain contradictions? That is the question I researched during my next jump down a mental rabbit hole. One online statement captured my attention: “…every technical system aims to reach its maximum state of ideality — function is performed (or need is fulfilled), but there is no system”.

That statement made sense in light of the second law of thermodynamics and the minimization of energy consumption, and it made me pause. From that moment began my introduction to Genrich Altshuller, the founder of TRIZ, which later led to the definition of Idealized Supply Chain Design principles, inspired by the trainings and work of TRIZ Master Valery Souchkov.


Let me first give you an initial overview of TRIZ itself.

TRIZ (a Russian acronym for the Theory of Inventive Problem Solving) had its starting point back in 1946, when Soviet inventor Genrich Altshuller and his colleagues researched thousands of patents. They discovered and codified specific patterns in the evolution of technical systems.

The main message is that the evolution of a technical system is not a random process but is driven by specific regularities and follows consistent patterns. These patterns can be used to consciously develop a system along its path of technical evolution and reach breakthrough innovation by solving very complex engineering problems. Over time, Altshuller and his students arrived at a definition of “contradictions” and developed frameworks that help solve issues at the lowest levels of any technical system. Today, technical TRIZ is actively applied at companies such as Samsung, Philips, LG Electronics, and many other names from the Fortune 500 list.

Since the 2000s, TRIZ has had another evolutionary boost, actively expanding into non-technical domains such as education, business, and software engineering, thanks to the passion and drive of a small, dedicated group of professionals. My kids are an excellent example, as they participate in several training platforms that use TRIZ educational methods to bolster children’s creativity and systematic problem solving.

Let’s get to the basics now…

One of the core principles we will work with is the system’s ideality. Supply chains are engineered systems, architected to perform several functions. Based on the principle of ideality:

a perfect supply chain performs all the functions it was designed for without using any resources or creating any harm to itself, people, its product, the environment, or society. 

In other terms: the consumer receives the product, but there is no supply chain behind it.

Any change in the supply chain that improves the performance of its designed functions, or reduces the resources consumed or the degree of harmful effects and waste, will advance it towards ideality. Such a mental model is critical when we zoom out to the level of strategic planning and business / operating model architecture.

It is essential to note that the focus is on the function, not the supply chain itself.

The operating model of the supply chain can be changed from the ground up as long as the new model meets its intended goals.

For instance, if we want to deliver a specific product to a consumer, we have several models at our disposal, each of which applies different operating principles:

‣ Product is packed and delivered by the courier to the consumer’s door

‣ Pack is picked by an autonomous vehicle and delivered to the destination

‣ Consumer downloads a CAD technical drawing and prints the product at home

The delivery of the goods will be considered ideal when the product appears in the right place, at the right time, and in the correct quantity without requiring any logistical system to get it to the consumer. Science fiction? Not necessarily.

The objective of such a model is not to create a space conquest strategy for the Supply Chain Management domain (although you might certainly discuss this idea with Elon Musk, as he will need space logistics experts soon). Rather, it is to move us outside the psychological inertia that makes us solve problems across the value chain the same way others do. This barrier to innovation becomes even more critical when we intend to redesign the whole operating model of the enterprise to unlock new levels of competitive advantage.

What often happens? We have a hammer, and we see nails everywhere. As a result, we tend to solve problems or implement solutions based on predefined templates of models and behaviors instead of creating tailored, unique system structures relevant only to you, your company, and the particular context of the business.

Why is this important within the strategic layer of the supply chain model? Because technology quickly becomes obsolete and commoditized, it is hard to build a competitive strategy on technology alone. The exciting potential lies in an intelligent blending of different elements to activate new capabilities and functions.

The best way to represent such a level of ideality is through a simple formula that I have adopted from the works of Valery Souchkov:

Li = Vc / (Vd + Cost)

Li = Level of Supply Chain ideality 

Vc = Value-creating factors (e.g., speed, flexibility, consumer satisfaction, etc.)

Vd = Value reducing factors (e.g., waste, environmental impact, long lead times, etc.)

Cost = all the costs associated with running and executing such a supply chain

The formula clearly states where our focus should be.
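Using the definitions above (Li = Vc / (Vd + Cost)), the ratio can be sketched as a tiny function; the numbers in the comparison are purely illustrative, not taken from any real supply chain:

```python
def ideality(value_creating, value_reducing, cost):
    """Level of supply-chain ideality: Li = Vc / (Vd + Cost).

    Ideality rises as value-creating factors grow, or as value-reducing
    factors and cost shrink; as (Vd + Cost) -> 0, Li -> infinity, i.e.
    'the function is performed, but there is no system'."""
    return value_creating / (value_reducing + cost)

# Illustrative numbers only: trimming waste (Vd from 30 to 10) raises ideality
baseline = ideality(value_creating=100, value_reducing=30, cost=20)  # 2.0
trimmed = ideality(value_creating=100, value_reducing=10, cost=20)   # ~3.33
```

The denominator makes the focus explicit: cutting waste and cost moves the system toward ideality just as surely as adding value does.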

When discussing the topic of ideality, it is worth considering some of the trends related to the evolution of the technical system.


I have listed just a few examples that boost the state of a supply chain’s ideality. Still, we will take a deeper look at them in later publications, as they form an essential foundation when an organization sets an ambition to make a quantum leap in its business performance.

Before we dive deeper into some of those trends in the context of our daily operational realities, I should clarify some relevant definitions related to:

‣ Trimming

‣ Supersystem

Trimming in the supply chain. When applying the trimming process, we look for ways to reduce (to trim) some of the components involved in a specific supply chain activity. Their functions are redistributed amongst other (existing or improved) components. This process might lead to better operational costs, safety, and processing speed.

Idealised Supply Chain Design 2011

Let’s assume that, based on functional analysis (an analogue of process analysis/value stream mapping), we have identified a set of activities performed within a specific part of the supply chain. For example, and to provoke some out-of-the-box thinking, let’s look at the case of international container shipments, which cause many challenges today due to the global imbalance in the freight markets.

The Core Elements of the ocean freight flow are the following:

Vessel — Container — Product

Idealised Supply Chain Design 2012

We know that a vessel’s availability is one of the factors currently causing adverse effects within our supply chain system. So let’s take it out of the equation and delegate the transportation function to the container. The new system would look something like this:

Idealised Supply Chain Design 2013

Imagine the ocean full of self-sailing and navigating containers using electro-drive powered by solar energy.
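The trimming move just described can be sketched as a simple operation on a map of components to the functions they perform. This is a toy model for illustration only, not a prescribed tool; the component and function names are hypothetical:

```python
# Hypothetical model: each component carries the set of functions it performs.
supply_chain = {
    "vessel":    {"transport across ocean"},
    "container": {"protect product", "unitise cargo"},
    "product":   {"satisfy consumer need"},
}

def trim(system, component, heir):
    """Remove a component and redistribute its functions to an heir component.

    Mirrors the TRIZ trimming rule: a component may be removed only if its
    useful functions are taken over by components that remain in the system.
    """
    functions = system.pop(component)   # the trimmed component's functions
    system[heir] = system[heir] | functions  # heir absorbs them
    return system

trimmed = trim(supply_chain, "vessel", "container")
# The container now also carries the transport function; the vessel is gone.
```

The value of writing it down this way is that it forces the question trimming always asks: for every function of the removed component, which remaining (or improved) component takes it over?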

Trimming is a powerful standalone tool that can help reduce costs across the value chain, and it should be used as collaborative work between the procurement and technical functions. I will publish separate material on this subject.

Supersystem of the supply chain. The supersystem is one of the core elements of the Idealised Supply Chain Design framework. The main point here is that the supply chain is not an isolated activity but a part of the specific supersystem that the company operates in.

Here I am going to extend the work of Valery Souchkov, who made a deep analysis of the supersystem within TRIZ by defining the key elements of the supersystem’s completeness:

‣ Supersystem as a source of resources for a supply chain system

‣ Supersystem as a driver of lifecycle within a supply chain system

‣ Supersystem as a means of control within a supply chain system

‣ Supersystem as the source of a target for a supply chain system’s existence

Idealised Supply Chain Design 2014

We can certainly draw a parallel between the supersystem and business ecosystem, which would be correct to a certain extent, considering there are elements mutually shared between both structures. In this context, a more correct view would be that the business ecosystem represents a sum of several supersystems that share common targets and objectives.

Now, here are some key elements related to our Supersystem:

🅐 The Supersystem provides the Resources necessary for the Supply Chain System to execute its function and reach its target. For example, the system can execute its function thanks to access to its sources, i.e., the vendors, who form a critical part of the supersystem

🅑 The Supersystem provides the Lifecycle of the Supply Chain System, meaning it provides everything necessary for the various execution stages to be performed within the system. A minimum number of subsystems is available within the Supersystem to ensure that planning, sourcing, production, delivery, and returns can be executed. For example, in the case of the Deliver stage, the Supersystem provides infrastructure (e.g., roads) that enables execution of the logistical function within the supply chain

🅒 The Supersystem provides Control over the Supply Chain System, activated through a minimal number of subsystems within the Supersystem itself. For example, the driver operates the forklift in the warehouse, but the Warehouse Management System guides the picking locations

🅓 The Supersystem, as the source of the Supply Chain System’s Target, provides the minimal number of subsystems necessary to ensure that the system effectively executes its primary purpose and function. For example, the Supersystem provides the customers or consumers who have specific expectations of the system’s final product. Otherwise, the system would be supplying goods to markets that do not exist (I used to joke about supplying refrigerators to the North Pole, but given the environmental challenges and temperature changes of the past 5 years, that might become a new reality)

Now we come to the Supply Chain System itself. It has a minimal viable structure (MVS) that usually includes Transfer Units, Conversion Units, a Control Centre, and Internal Users. But in order not to overload this publication, I will skip the details at this stage.
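As a rough illustration, the MVS can be expressed as data types. The class names follow the list above; the fields, the example values, and the viability check are my own illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TransferUnit:        # moves goods or information between nodes
    name: str

@dataclass
class ConversionUnit:      # transforms inputs into outputs (e.g., a plant)
    name: str

@dataclass
class ControlCentre:       # plans and steers the flows
    name: str

@dataclass
class SupplyChainSystem:
    transfer_units: List[TransferUnit] = field(default_factory=list)
    conversion_units: List[ConversionUnit] = field(default_factory=list)
    control_centre: Optional[ControlCentre] = None
    internal_users: List[str] = field(default_factory=list)

    def is_minimally_viable(self) -> bool:
        """True when every element of the minimal viable structure is present."""
        return bool(self.transfer_units and self.conversion_units
                    and self.control_centre and self.internal_users)

system = SupplyChainSystem(
    transfer_units=[TransferUnit("ocean freight")],
    conversion_units=[ConversionUnit("packing plant")],
    control_centre=ControlCentre("S&OP team"),
    internal_users=["regional warehouses"],
)
```

The check makes the completeness idea concrete: remove any one of the four elements and the structure is no longer a viable supply chain system.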

Let’s return to our conversation about the technical systems trends and their applicability within the supply chain domain.

Most interesting is that, by combining knowledge about such trends with S-curve analysis, we are well positioned to forecast several scenarios for supply chain systems’ further development in the context of a specific business activity. This knowledge gives us the means to design a long-term transformation roadmap for a specific operating model and its subsystems. But this should be performed in combination with the trends of consumer behavior evolution that we will analyze in an upcoming publication.
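For readers unfamiliar with S-curve analysis: a technical system’s performance over time typically follows a logistic curve, and the local growth rate hints at whether to keep optimizing the current system or jump to the next one. A minimal sketch with invented parameters:

```python
import math

def s_curve(t, ceiling=100.0, midpoint=5.0, steepness=1.0):
    """Logistic model of a technical system's performance over time."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# The growth rate tells us where we are on the curve: steep growth near the
# midpoint suggests investing in the current system, while flat growth near
# the ceiling suggests scanning for the next S-curve.
growth_mid = s_curve(5.5) - s_curve(4.5)    # fast improvement phase
growth_late = s_curve(9.5) - s_curve(8.5)   # maturity phase
assert growth_mid > growth_late
```

The parameters (ceiling, midpoint, steepness) would in practice be fitted to historical performance data for the specific system being analyzed.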

Why is this important? Idealized Supply Chain Design operates across several layers, each with its own set of tools and toolkits that could be applied to solve strategic, resource-based, and operational challenges within the supply chain.

Idealised Supply Chain Design 2015

This means that we need to work from the strategic level, taking a systematic look at the supply chain as a whole, including analyzing its subsystems and the current state of the supersystem. We do not want supply chain engineers and architects to implement changes across the business value chain that no supersystem can support.

We do not need to look far for examples. We already have hydrogen-powered engines and cars available, but access to the fuel itself is lacking. Another example: you develop automated warehouses for inbound cargo handling, but your supply base is not ready to adopt the new palletization requirements needed for smooth warehousing operations and a low level of exception handling.

From the strategic level, we move into resource management blended with processes and flows. Those are the layers where we usually start facing the first contradictions, which will not accept simple trade-offs but require disciplined, structured work to resolve. And that is where the miracle of idealized supply chain design begins.

This publication is an introductory overview intended to give the reader a high-level understanding of some essential principles of Idealised Supply Chain Design. The following article will provide insights into the “soft” elements of the discipline, psychological inertia and the role of abstractions, and why those are important and relevant for supply chain practitioners.

Stay tuned. More to be published on the topic…

About the Author: Aleksandr is Advisory Partner Industry 4.0 @ Cognizant. He is on a mission of supporting clients across AgriFood, FMCG, and Retail sectors to accelerate value chain transformation boosted by the Internet of Things (IoT) and intelligent data leverage. Previously he spent 18 years shaping operational capabilities for the Fortune 500 companies: Olam International, Holcim, Johnson & Johnson, Caterpillar / Zeppelin, and Procter & Gamble.

He blends his professional activity with a personal exploration of startup ecosystems combined with research related to Lean Six Sigma, Inventive Problem Solving (TRIZ), and Systems Engineering to activate solutions for the most challenging issues across the supply chain domain.

More information: www.sidorec.com

Idealized Supply Chain Design (ISCD) by Aleksandr Sidorec is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

OUR CONSCIOUS AND SUBCONSCIOUS MINDS: A POWERFUL DUO  By Stuart G. Walesh

  1. Abstract

This article explores the idea that professionals will be even more innovative if they proactively use their complementary conscious and subconscious minds. It begins by describing our two minds with emphasis on how they work. The article introduces the double-diamond process as a way of thoroughly defining and then solving a problem. It discusses tools available for taking a whole-brain approach -- left and right hemisphere and conscious and subconscious mind -- to innovative problem solving. The article concludes by offering ideas on why some tools work well, especially in more fully engaging our complementary conscious and subconscious minds.

  2. Introduction

This article presents the basics of our brain’s conscious and subconscious thinking processes and asserts that knowledge enables us to be even more innovative. We often hear about our brain’s very different left and right halves or hemispheres and benefit from that information. In contrast, we seem to be less conscious of our conscious and subconscious thinking. Let’s rectify that imbalance so that we are prepared to engage in what I call whole-brain thinking (Walesh 2017) -- intentional and effective use of our left and right hemispheres and conscious and subconscious minds.

I use the word subconscious to refer to brain processes that lie below what professor Rollo May (1976) calls our “level of awareness.” Other terms for the subconscious are unconscious and preconscious.

We are very aware of the capabilities of our conscious mind. It explicitly drives our work and other lives. My hope is that you will, because of reading this article about conscious and subconscious thinking, more proactively engage your subconscious mind.

  3. Location of Our Conscious and Subconscious Minds

As explained by neuropsychologist Paul D. Nussbaum (2010), we can think of the brain as divided in a ‘top-down’ orientation, with the cortex at the top and the subcortex at the bottom, as suggested here:

Nussbaum says that the cortex “is a convoluted mass of cells, with folds and flaps that sits snug within your skull.” He explains that “the cortex is primarily responsible for the most complex thinking abilities, including memory, language, planning, concept formation, problem solving, spatial representation, auditory and visual processing, mood, and personality.” Cortex processing is conscious; it is intentional.

Positioned beneath the cortex, the more primitive subcortex “primarily processes rote skills and procedures” with most of the processing being subconscious (Nussbaum 2010). Examples of subconscious activities are word processing, tying your shoes, and driving -- things we do habitually. The cortex and subcortex connect in many ways and work very effectively together.

Scientists share widespread agreement about the existence of conscious and subconscious cognitive processes but the precise location of the processes is somewhat uncertain. For example, while Clayman (1991) and Mlodinow (2013) generally support Nussbaum’s cortex - subcortex model, biologist and researcher John Medina (2008) says, “We don’t know the neural location of consciousness, loosely defined as that part of the mind where awareness resides.” Nussbaum (2014) says there is “no real conflict” because “the brain does work in harmony, yet it can also maintain regional specialization.”

  4. How Our Conscious and Subconscious Minds Work

Psychiatrist Scott Peck (1997) says, “The conscious mind [drawing on information from our senses and memory] makes decisions and translates them into actions.” As an example of using your conscious mind, you define a problem, develop alternative solutions, compare them, select one, and recommend it. You are aware of the cognitive processing required for that process. With our conscious mind, we are thinking and we know it.

In contrast, the cognitive processing in the subconscious mind occurs without our being aware of it. “The [subconscious] mind resides below the surface;” according to Peck, “it is the possessor of extraordinary knowledge that we aren’t naturally aware of.” In the case of our subconscious mind, we are thinking and we don’t know it. During that conscious problem-solving process described in the previous paragraph, we can be certain that the subconscious mind is influencing, unbeknownst to us, the process.

One indication of the functioning of your subconscious mind: That great idea that “pops into your head” or “comes out of the blue.” The subconscious mind, if we can more effectively use it, has great potential as suggested by writer and anthropologist Martha Lagace (2012) who said “Our conscious mind is pretty good at following rules, but our unconscious mind -- our ability to think without attention -- can handle a larger amount of information. Studying the unconscious mind offers exciting new avenues for research, including creativity, decision making, and sleep.”

Considering further the relative impact on us of our conscious and subconscious minds, neuroscientist David Eagleman (2012) writes “consciousness is the smallest player in the operations of our brain. Our brains run mostly on autopilot.” The biggest player is our subconscious mind, which, as stated by Peck (1997), “resides below the surface.” As illustrated metaphorically in the following figure, conscious cognitive processing is the tip of the iceberg; subconscious cognitive processing is much larger and invisible.

Consider some more metaphors to help understand our conscious and subconscious minds and how they work together in a complementary manner. They also suggest how you can cause them to work even better together (all from scientist and theologian Murphy (2000), except as noted):

  • The conscious mind is the camera and the subconscious mind is the image you want to capture, so point your “camera” at the things you want to capture.
  • The conscious mind sees reality while the subconscious mind cannot tell the difference between reality seen by the conscious mind and that imagined by the conscious mind (Tice 2002). Therefore, consciously imagine and visualize those good things you desire and your subconscious mind will accept and work on them as though they were an evolving reality.
  • The conscious mind selects and plants seeds and the subconscious mind germinates and grows them. Select seeds for the crop you want to harvest.
  • Your conscious mind is the cause; your subconscious mind, the effect. Choose your causes carefully.
  • “You can give problem-solving and idea-getting tasks to your [subconscious] mind, send it off on a search while you do other things, even while you sleep, and have it return with useful material you didn’t know you knew and might never have obtained through conscious thought or worry” (Maltz and Kennedy 2001).
  • The conscious mind is a part-time worker while the subconscious mind works full-time; it never sleeps (Gibb 2012). Use the limited time available with your conscious mind to direct and fully utilize the 24/7 efforts of your subconscious mind.
  • The conscious mind is the ship’s captain and the subconscious mind a fast ship with a resourceful crew.
  • According to author Richard Carlson (1997), the subconscious mind is the back burner of your mind that “mixes, blends, and simmers ingredients into a tasty meal.” He advises us to feed our always-available back burner with a “list of problems, facts, and variables, and possible solutions,” let them simmer, and expect a pleasing result.

The following table further explains major differences between our conscious and subconscious cognitive processing. By leveraging those great differences, that is, by being aware of and linking them, we can enhance, individually and collectively, our innovation.

Conscious                                        Subconscious
---------                                        ------------
When thinking, we know it                        When thinking, we don’t know it
Intermittent                                     24/7
Linear processor                                 Parallel processor
Slow                                             Fast
Prefers complete information in order            Can work with pieces
to decide/do
Sees, or thinks it sees, what can be             Believes that what is imagined by the
accomplished                                     conscious mind can be achieved and
                                                 goes to work on it
Does not control dreams                          Controls dreams
Can change habits                                Source of habits

Genrich Altshuller recognized the existence and complementary roles of our conscious and subconscious minds. He noticed that “many inventions were made in three steps. First, an inventor intensely and unsuccessfully searches for a solution. Then, having not solved the problem, he stops thinking about it. Some time passes, and suddenly, as if a delayed action mechanism goes off -- ‘as if by itself’ -- the required solution appears” (Altshuller 1999).

Self-Healing Material

December 21, 2021

One of the critical issues with the current trend of bigger and bigger screens found on modern smartphones is that they get more easily damaged due to their size. On top of that, these larger, more complex screens are usually more expensive to replace or repair.

Scientists from Korea might have found a solution to this particular problem: they developed a self-healing material that can repair cracks and even restore damaged functions.

Image Credit: encierro via Shutterstock / HDR tune by Universal-Sci

The Korean researchers focused their attention on a material called CPI (colorless polyimide). The material is widely used: it possesses excellent durability alongside good tensile strength, making it ideal for industries such as aviation. It is also used in solar cells and flexible screens for foldable devices.

Several efforts have been made to improve the durability of CPI, as the potential return on investment is enormous due to the material's large-scale use. Unfortunately, these attempts have not yet been successful, which makes potential self-healing technology even more relevant.

One of the challenges the research team encountered was that any self-healing technology should not compromise the beneficial properties of colorless polyimide, including its transparency. The scientists opted for a creative solution using linseed oil, obtained from the seeds of the flax plant. Flax fibers are commonly used to make linen, but the researchers were interested in the seeds; perhaps there is some potential for synergy there. Oil extracted from flax seeds hardens quickly, making it ideal for use as a coating material.

Using the flax oil, the team created oil-loaded microcapsules and built the topmost healing layer by mixing them with silicone. When damage occurs, these microcapsules break, and the leaking flax oil fills the cracks and quickly hardens, effectively restoring the screen. A considerable benefit of the oil's liquid state is that it naturally finds its way to the damaged area, repairing any local blemishes.


The self-healing layer is flexible, twistable, bendable, and foldable - Image Credit: Korea Institute of Science and Technology( KIST)

Another advantage of this semi-liquid solution is that it can be used on hard materials such as smartphone screens, whereas current self-repairing technology can only be applied to soft materials. There, the idea is that the soft material melts when heat is applied, smoothing out scratches and other marks. The microcapsule material instead adds a liquid layer on top of hard materials; this layer only becomes liquid when damaged, and it does so at room temperature without any additional heat. It quickly hardens again afterward, making the surface hard once more. The healing process completes within only 20 minutes as the material interacts with UV light. Simple but ingenious.


Robotic Fish Tail and Elegant Manufacturing

Math Could Inspire Underwater Drones

Engineers uncovered the secrets of highly efficient swimming at varying speeds.


Picture by iStock

Underwater vehicles are typically designed for one cruise speed, and they’re often inefficient at other speeds. The technology is rudimentary compared to fish, which swim well whether fast or slow.

What if you want your underwater vehicle to travel fast through miles of ocean, then slow down to map a narrow coral reef, or speed to the site of an oil spill then throttle back to take careful measurements?

Dan Quinn, an assistant professor at the University of Virginia School of Engineering and Applied Science, and his colleague, recent UVA Ph.D. graduate and postdoctoral researcher Qiang Zhong, discovered a key strategy for enabling these kinds of multispeed missions. They demonstrated a simple way to implement this strategy in robots, one that could ultimately transform underwater vehicle design. Their work was recently published in Science Robotics.

When designing swimming robots, a question that keeps coming up for researchers is how stiff to make the piece that propels the robot through the water. It’s tricky because the same stiffness that works well in some situations can fail miserably in others.

Solving complex problems with even more complex solutions is a guarantee for even more problems.

by Arno Koch

JUN 22, 2021

Now that even my fridge has to be "smart" and a simple sensor is called "4.0," I wonder: does everyone have the same idea about the definition of Industry 4.0, and how far along are we really in this process? And what is 4.0 really about? I like to distinguish fact from fiction, and here are my conclusions.

Henry Ford created flow production, which allowed large quantities of the same product to be produced at affordable costs. He described this in several (still very readable) books. On the other side of the world, after World War II, these books were closely studied by one of his competitors: Toyota.

Toyota, originally a builder of weaving looms, implemented Ford's principles in detail. However, there were two differences:

1. Whereas in the U.S., everything was abundant, on the Japanese island, with hardly any natural resources, scarcity of everything has traditionally prevailed, giving rise to an incredible focus on eliminating waste.

2. The Japanese have learned that only by working in close harmony with their environment and cooperating with the people around them can they produce enough to survive together.

These two characteristics were precisely what shaped the Toyota Production System: working together in a constant struggle against losses, Toyota developed the famous TPS after World War II, during a phase of deadly liquidity problems.

MATERIALS

World's first graphene-enhanced concrete slab poured in England

By Nick Lavars, New Atlas                                     May 25, 2021


University of Manchester researchers Craig Dawson, Happiness Ijije and Lisa Scullion onsite as workers tend to the world's first graphene-enhanced concrete slab in the background.

University of Manchester/Nationwide Engineering

As the most widely used material on Earth, concrete has a massive carbon footprint that scientists are working to chip away at in all sorts of ways. Recent research projects have demonstrated how the wonder material graphene could play a role in this, and now we are seeing the first real-world deployment of the technology, with engineers using so-called "Concretene" to form the foundations of a new gym in the UK.

As the world's strongest artificial material, graphene may have a lot to offer the world of construction, among its many other potential uses. Scientists have previously found success incorporating it into the concrete manufacturing process to make the finished product stronger and more water-resistant, while one research project even demonstrated how graphene can be recovered from old tires.


Future Timeline

First Human Uses High-Bandwidth Wireless Brain-Computer Interface

The device is called "BrainGate."

2 April  2021

Kevin Stacey


Brain-computer interfaces (BCIs) are an emerging assistive technology, enabling people with paralysis to type on computer screens or manipulate robotic prostheses just by thinking about moving their bodies. For years, investigational BCIs used in clinical trials have required cables to connect the brain's sensing array to computers that decode the signals and use them to drive external devices.

A BrainGate clinical trial participant uses wireless transmitters that replace the cables usually used to transmit signals from sensors inside the brain. Image: Braingate.org

For the first time, BrainGate clinical trial participants with tetraplegia have demonstrated an intracortical wireless BCI with an external wireless transmitter. The system can transmit brain signals at single-neuron resolution and in full broadband fidelity without physically tethering the user to a decoding system. The traditional cables are replaced by a small transmitter about 2 inches in its largest dimension and weighing a little over 1.5 ounces. The unit sits on top of a user's head and connects to an electrode array within the brain's motor cortex using the same port used by wired systems.

What’s Ahead for Product Development & Innovation in 2021

Four industry predictions from a manufacturing-tech strategist

John McEleney    JAN 22, 2021

Like all industries, the design and manufacturing world experienced a shockwave of changes in 2020 due to the pandemic. As a result, many product development companies suddenly had to become more agile and pivot in new and innovative ways. Amid the continued challenges of COVID-19 in 2021, this need to be agile continues for businesses of all sizes.

Collaboration among remote workers will be just as important next year. As product developers create innovative solutions to rapidly changing problems, lead times from concept to manufacturing will need to keep shrinking to get products to market in time—before market conditions change yet again.

With all that in mind, here are my top four industry predictions for 2021:

What's Fueling the EV Drive?

Electric vehicles are garnering significant attention, but is the market ready to make the shift?

Peter Fretty
JAN 27, 2021

 

EV Cars

The age of electrification is here. Perhaps, more accurately, noticeable excitement and momentum are surrounding the ongoing development of electric vehicles (EVs). Growing regulations around lower tailpipe emissions; the increasing availability (and announced availability) of attractive options from both start-ups and automotive mainstays; improving battery technology; and better economics are collectively fueling the current surge.

"There is a fundamental belief that we are in a transitional period right now. The future will certainly include internal combustion engines (ICE) – although they won't have the same penetration as they enjoy today. And, we will most certainly have EVs," says Brian Irwin, managing director of Accenture's automotive and mobility practice in North America.

Adding to the EV optimism, consumers who have already invested in electric vehicles are overwhelmingly impressed with these ICE alternatives. According to the results of J.D. Power's first U.S. Electric Vehicle Experience Ownership Study, a whopping 95% of EV owners whose overall ownership satisfaction exceeds 900 points say they will purchase another EV, with roughly two-thirds (64%) noting that they will repurchase the same brand. However, brand loyalty lessens as satisfaction declines: among owners whose satisfaction is between 600 and 750 points, 77% indicate they 'definitely will' purchase another EV, although their likelihood of repurchasing the same brand is only 25%.

Smart Concrete Paves Way to High-Tech Roads

Smart, efficient infrastructure just makes sense.

Nov 17th, 2020

Luna Lu & Vishal Saravade, Purdue University


Every day, Americans travel on roads, bridges and highways without considering the safety or reliability of these structures. Yet much of the transportation infrastructure in the U.S. is outdated, deteriorating and badly in need of repair.

Of the 614,387 bridges in the U.S., for example, 39% are older than their designed lifetimes, while nearly 10% are structurally deficient, meaning they could begin to break down faster or, worse, be vulnerable to catastrophic failure.

The cost to repair and improve nationwide transportation infrastructure ranges from nearly US$190 billion to almost $1 trillion. Repairing U.S. infrastructure costs individual households, on average, about $3,400 every year. Traffic congestion alone is estimated to cost the average driver $1,400 in fuel and time spent commuting, a nationwide tally of more than $160 billion per year.

I am a professor in the Lyles School of Civil Engineering and the director of the Center for Intelligent Infrastructures at Purdue University. My co-author, Vishal Saravade, is part of my team at the Sustainable Materials and Renewable Technology (SMART) Lab. The SMART Lab researches and develops new technologies to make American infrastructure “intelligent,” safer and more cost-effective. These new systems self-monitor the condition of roads and bridges quickly and accurately and can, sometimes, even repair themselves.

Imagine a flexible digital screen that heals itself when it cracks.

5 June, 2020 - National University of Singapore

Principle 30: Flexible skin for robots

The NUS research team behind the novel electronic material: Assistant Professor Benjamin Tee, center, Wang Guanxiang, left, and Dr. Tan Yu Jun. -- National University of Singapore

Imagine a flexible digital screen that heals itself when it cracks or a light-emitting robot that locates survivors in dark, dangerous environments or carries out farming and space exploration tasks. A novel material developed by a team of researchers from the National University of Singapore (NUS) could turn these ideas into reality.

The new stretchable material, when used in light-emitting capacitor devices, enables highly visible illumination at much lower operating voltages and is also resilient to damage due to its self-healing properties.

This innovation, called the HELIOS (which stands for Healable, Low-field Illuminating Optoelectronic Stretchable) device, was achieved by Assistant Professor Benjamin Tee and his team from the NUS Institute for Health Innovation & Technology and the Department of Materials Science and Engineering at the NUS Faculty of Engineering. The results of the research were first reported online in the prestigious scientific journal Nature Materials on 16 December 2019 and were also published in print in the February 2020 issue.

Durable, low-power material for next-gen electronic wearables and soft robots

"Conventional stretchable optoelectronic materials require high voltage and high frequencies to achieve visible brightness, which limits portability and operating lifetimes. Such materials are also difficult to apply safely and quietly on human-machine interfaces," explained Assistant Professor Tee, who is also from the NUS Department of Electrical and Computer Engineering, N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems program.

To overcome these challenges, the team of five NUS researchers began studying and experimenting with possible solutions in 2018, and eventually developed HELIOS after a year.

To lower the electronic operating conditions of stretchable optoelectronic materials, the team developed a material that has very high dielectric permittivity and self-healing properties. The material is a transparent, elastic rubber sheet made up of a unique blend of fluoroelastomer and surfactant. The high dielectric permittivity enables it to store more electronic charges at lower voltages, enabling a higher brightness when used in a light-emitting capacitor device.
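The charge-storage argument can be illustrated with the textbook parallel-plate capacitor relations Q = C·V and C = ε_r·ε_0·A/d: at a fixed geometry and voltage, stored charge scales linearly with relative permittivity. The geometry and permittivity values below are hypothetical stand-ins, not the actual HELIOS device parameters:

```python
# Illustrative physics only: a parallel-plate capacitor model showing why a
# higher dielectric permittivity stores more charge at the same voltage.
# All numeric values are hypothetical, not measured HELIOS properties.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def stored_charge(eps_r, area_m2, thickness_m, voltage):
    capacitance = eps_r * EPS0 * area_m2 / thickness_m  # C = eps_r * eps0 * A / d
    return capacitance * voltage                        # Q = C * V

# Same geometry and drive voltage, two hypothetical dielectrics:
q_low = stored_charge(eps_r=3.0, area_m2=1e-4, thickness_m=1e-5, voltage=25)
q_high = stored_charge(eps_r=30.0, area_m2=1e-4, thickness_m=1e-5, voltage=25)
print(q_high / q_low)  # charge scales linearly with permittivity -> 10.0
```

Equivalently, a high-permittivity film reaches a target charge (and hence brightness) at a proportionally lower voltage.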

Unlike existing stretchable light-emitting capacitors, HELIOS-enabled devices can turn on at voltages that are four times lower and achieve illumination that is more than 20 times brighter. HELIOS also achieved an illumination of 1,460 cd/m2 at 2.5 V/μm, the brightest attained by stretchable light-emitting capacitors to date and comparable to the brightness of mobile phone screens. Due to its low power consumption, HELIOS can achieve a longer operating lifetime, be used safely in human-machine interfaces, and be powered wirelessly to improve portability.
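The reported field strength can be translated into an absolute drive voltage once a film thickness is chosen. The thickness below is an assumption for illustration; the article gives only the field (2.5 V/μm) and the peak luminance (1460 cd/m2):

```python
# Rough reading of the HELIOS figures. The film thickness is a hypothetical
# value for illustration; only the field strength and luminance come from
# the article.
field_v_per_um = 2.5
assumed_thickness_um = 10.0  # hypothetical film thickness
drive_voltage = field_v_per_um * assumed_thickness_um
print(f"Drive voltage for a {assumed_thickness_um} um film: {drive_voltage} V")

luminance = 1460        # cd/m^2, brightest reported for a stretchable device
typical_phone = 500     # cd/m^2, a rough figure for a phone screen's peak
print(f"~{luminance / typical_phone:.1f}x a typical phone screen")
```

A thinner film would reach the same field at an even lower voltage, which is why thickness matters as much as permittivity for portable operation.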

HELIOS is also resistant to tears and punctures. The reversible bonds between the molecules of the material can be broken and reformed, thereby allowing the material to self-heal under ambient environmental conditions.

Describing the potential impact of HELIOS, Asst Prof Tee said, "Light is an essential mode of communication between humans and machines. As humans become increasingly dependent on machines and robots, there is huge value in using HELIOS to create 'invincible' light-emitting devices or displays that are not only durable but also energy-efficient. This could generate long-term cost savings for manufacturers and consumers, reduce electronic waste and energy consumption, and in turn, enable advanced display technologies to become both wallet and environmentally friendly."

For example, HELIOS can be used to fabricate long-lasting wireless displays that are damage-proof. It can also function as an illuminating electronic skin for autonomous soft robots to be deployed for smart indoor farming, space missions, or disaster zones. Having a low-power, self-repairing, illuminating skin will provide safety lighting for the robot to maneuver in the dark while remaining operational for prolonged periods.

Next steps

The NUS team has filed for a patent for the new material and is looking to scale up the technology for specialty packaging, safety lights, wearable devices, automotive, and robotics applications.

TRIZ Principle #30 – Flexible Films, Thin Membranes, and Coatings

The nation can't rely on private R&D for a variety of reasons.

Michael Collins, Industry Week

JAN 22, 2020

Just about everybody in the U.S. agrees that America must maintain its economic position and stay ahead of China and other foreign competitors by using a strategy of innovation. Innovation is commonly defined as being first to acquire new knowledge through leading-edge research, first to apply that knowledge to create sought-after products and services, and first to introduce those products and services into the marketplace. If the U.S. is committed to using innovation as its primary competitive strategy, then we must do a better job of the first step: leading-edge research.

Two fundamentally different kinds of research lead to innovation. The first is basic research, which is generally conducted by the federal government. The second is research and development, also called applied research, which is usually undertaken by private firms. Basic research differs from research and development because it investigates fundamental science, is high-risk, and seldom results in commercial products in the short term. Private research and development, on the other hand, uses the new technological ideas discovered by basic research to develop new products and is driven by shareholder value and short-term profits.

The United States has a long history of investing in federal basic research going back to World War II when federal research was used to develop radar, electronics, atomic power, jet fighters, and many other technologies used to win the war.

After the war, the U.S. continued to invest in basic science research, which was used to create many of the technologies and industries we see today. Federal basic research was the initial research behind the Google search engine, global positioning satellites, supercomputers, artificial intelligence, speech recognition, the Internet, smartphone technologies, the shale gas revolution, seismic imaging, LED lighting, magnetic resonance imaging (MRI), advanced prosthetics, and the Human Genome Project. Many of these new technologies led to new industries spawning many new markets. An example that everybody understands is the internet, which grew out of ARPANET, a project of the Defense Department's Advanced Research Projects Agency (ARPA).

The development of new technologies into useful products was accomplished by private companies, but all of these products came, initially, from federal basic research in many fields of science. As an example, transistors were not suddenly discovered by the electronics industry; they came from people working with wave mechanics and solid-state physics. Light-emitting diode technology began with the study of infrared emissions from gallium arsenide and other semiconductor alloys. Magnetic resonance imaging came from research into spin echoes and free induction decay.

How Vital Is Basic Research?

A study published in 2019 in the journal Science shows that one-third of all U.S. patents since 1970 relied on government-funded research. The study was based on an investigation of all patents issued from 1926 to 2017 and underscores the importance of funding basic federal research.

The Decline of Basic Research

According to the Information Technology and Innovation Foundation (ITIF), federal basic research has declined in 22 of the past 28 years. The following chart shows that, as a percentage of GDP, federal research has fallen from a high of 2.5% in 1964 to 0.61% in 2018.

[Chart: Innovation Investment, 2017]

Source: Information Technology and Innovation Foundation – December 2018.

The chart also shows that business research and development as a share of GDP continues to grow. This raises an obvious question: since private research and development has grown from 0.7% of GDP in 1956 to 2.0% in 2016, isn't this adequate to support our strategy of innovation, making federal basic research unnecessary? I will argue that the answer is no: private R&D is not adequate, and we need to increase the federal basic research budget.
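The chart's endpoints can be restated numerically; only these endpoint values appear in the text, and the intermediate years are not given:

```python
# Endpoints of the ITIF chart as stated in the text (% of GDP).
federal = {1964: 2.5, 2018: 0.61}    # federal research
business = {1956: 0.7, 2016: 2.0}    # business R&D

fed_drop = federal[1964] - federal[2018]
biz_growth = business[2016] / business[1956]
print(f"Federal research fell {fed_drop:.2f} percentage points of GDP")
print(f"Business R&D grew ~{biz_growth:.1f}x as a share of GDP")
```

Federal research lost nearly two percentage points of GDP while business R&D roughly tripled its share, which is exactly the crossover the chart depicts.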

The primary reason we can't rely solely on private R&D is that 80% of business research and development funding goes to applied research that leads to products; only the remaining 20% goes to basic research. This imbalance exists because basic research is very risky, long-term, of uncertain applicability, and unlikely to lead to short-term profits.

One of the big problems facing corporations is the dominance of the financial sector. In an earlier article, "We Must Save America's Manufacturing Sector," I showed that as the manufacturing sector has declined, it has been replaced by the service sector in terms of GDP and influence. Like it or not, America's multinational corporations are pushed to achieve short-term profits and shareholder value as their number one priority.

Since the financial sector is now the dominant sector in the economy, the trend to reduce costs, including R&D budgets, is gaining traction. According to the ITIF report on the ingredients of innovation, 80% of chief financial officers in the U.S. responded that they would cut R&D to meet their firms' next-quarter projections. Also, manufacturing continues to move to foreign countries, taking research and development with it. According to the same ITIF report, "U.S.-based companies now have 23% of their research and development employment located abroad."

Many boards of multinational corporations are dominated by activist board members who want to achieve short-term profits any way they can. A good example is what happened to the DuPont Company. For more than 200 years, DuPont has invented innovative products sold all over the world. Its Experimental Station, a research lab, created products like Nylon, Freon, Lycra, Neoprene, and Kevlar. But in 2016, its fifth-largest shareholder, Trian Fund Management, demanded the company cut $4 billion from the business and double the stock price to optimize shareholder value.

In 2016, Trian Fund Management forced a change in the company's approach to research and development, pushing to reduce R&D costs and execute a stock buyback that would deliver quicker profits. The fund succeeded: DuPont did the buyback and laid off 5,000 people worldwide. Part of this cost reduction was a 20% cut to the R&D budget, the layoff of hundreds of research staff, and the closure of one research lab.

A survey of major corporations by Sullivan & Cromwell showed that 260 activist campaigns were under way in the U.S. in 2018. During that same year, according to a July 2019 report from J.P. Morgan, stock buybacks reached $800 billion. The point here is that the emphasis is on short-term profits, cost-cutting, and stock buybacks, which can cannibalize innovation, slow growth, and harm U.S. competitiveness. In this environment, there is little chance that U.S. corporations' investment in basic research will improve anytime soon.

Current problems

1. Foreign Competitors: The U.S. is not keeping up with foreign competitors in terms of investment in federal research. Federal research is now 0.6% of GDP and would have to be increased by $100 billion to equal 1980 levels. The article "Dwindling Federal Support for R&D is a Recipe for Economic and Strategic Decline" says that China's investment in government research has increased 56% since 2011 and Russia's has increased by 13%. During the same period, U.S. investment in basic government research fell by 12%. The article's summary states, "this is a recipe for decline, economically and strategically." Measured by basic research as a fraction of GDP, the United States ranks fifth among all nations.

2. Federal Deficits: One of the biggest problems facing the basic research budget, or any other non-defense budget, is the federal deficit. The national debt has grown to more than $22 trillion and is constantly used as the excuse to cut federal non-defense budgets. A good example is the need for an infrastructure program to improve our highways, bridges, ports, sewers, water lines and more. This problem has been ignored for 30 years and would now cost $4 trillion to fix. Because of rising deficits and debt, it is not likely Congress will invest in infrastructure or federal basic research, even though both are critical to the innovation strategy that is supposed to fuel America's competitiveness. The federal basic research budget is scheduled for a $5 billion cut in 2020.

Conclusions

The federal report "Rising Above the Gathering Storm Revisited" concluded that "the U.S. appears to be on a course that leads to declining, not growing, standard of living for our children and grandchildren."

It is difficult to see the positive outcomes of an investment in basic research because it does not lead to visible products in the short term, and the long-term consequences of not investing in basic research are masked by short-term economic successes. In the long term, disinvestment can lead to stagnant productivity, lagging competitiveness, and reduced innovation.

Multinational corporations' bending to the will of the financial sector and putting short-term profits first benefits shareholders but may be devastating to the country over the long term. Ralph Gomory, former senior vice president of science and technology at IBM, succinctly summarized our current situation: "What is good for America's global corporations is no longer necessarily good for the American people."

America's stated goal is to compete in the world market by using a strategy of innovation. But if we can't invest in the necessary research and science that creates the opportunities for new technologies, it is difficult to see how America is going to compete with the Asian countries or remain the number one economy in the world.

Michael Collins is the author of The Rise of Inequality and the Decline of the Middle Class.