Next generation lithium-ion batteries

Lithium-ion technologies are the most widely used electrochemical energy storage technology today. Last year, they received the bulk of industry’s applied research, which focused largely on driving incremental improvements. Venture capital, on the other hand, invested over half a billion dollars in solutions that address lithium-ion’s challenges through new chemistries or new technology paths to solve our global energy storage problem. Over the last few months, we have received inquiries about the market progress of those portfolio companies. Through these conversations, we noted an inconsistent understanding of battery technologies and the challenges the industry faces.

To address this, Cypress River Advisors sat down with William Chueh, a leading materials science and engineering researcher at Stanford University, and his team of Ph.D. students, who are tackling the question: how do you build a better battery? While there are many different kinds of energy storage systems, the rise of mobile devices has made lithium-ion the incumbent technology for consumer electronics and electric vehicles. It serves as one of the major benchmarks against which all battery technologies are compared today. We hope that this article and its related videos will give industry observers an initial overall sense of the challenges ahead for different technologies.

The ideal battery

Batteries have been around since the time of Benjamin Franklin. A smartphone battery now packs more power than the single-use electrochemical cells that were once the size of milk jugs. Today, batteries are essential to our modern lifestyles. They power our phones, cars and even homes.

An electrochemical battery is fairly simple in construction. It is composed of a cathode (positive end), an anode (negative end), an electrolyte that serves as a medium to conduct ions, and a separator that isolates the electrodes but allows the movement of ions. But what are the characteristics of an ideal battery?

  • High capacity and stable energy output over a long run time
  • High power to run power tools or an electric vehicle motor in the smallest and lightest form factor possible
  • Fast and consistent recharging times
  • Long life and durability
  • Safe usage under wide operating conditions with respect to temperature and humidity
  • Low toxicity during manufacturing and at end of life
  • Affordable source materials and manufacturing process


Unfortunately, no single chemistry delivers all of the above characteristics simultaneously. The lead-acid battery in your car is impractical for mobile phones but practical as the starter for your Mustang: it can survive a wide range of temperatures, and lead-acid is the ubiquitous, cheap incumbent. Other chemistries, like the vanadium flow battery, are well suited to grid applications because they can store power over a long period of time. But that strength is also a weakness: to store large amounts of stable power, you also need tanks the size of a car. What about the batteries that power our smartphones, tablets and electric vehicles? (Click here for a recent history of the rechargeable battery.) Lithium-ion (Li-ion) batteries are relatively lightweight and can be recharged thousands of times, which makes them perfect for mobile applications, but when damaged they can catch fire. At the end of the day, batteries are optimized for their applications.

Industry focus today: lithium-ion batteries

The bulk of today’s commercial research is focused on lithium-ion technologies, and it is important to note that there are several variants of Li-ion technology. As we mentioned above, there are many energy storage options available. Several different types of energy storage technology are receiving venture capital attention, e.g. flow batteries, silicon anodes, sodium-sulfur, advanced lead-acid, liquid metal batteries and so on. Battery innovators are developing not only new chemistries but also new material structures for the different parts of the battery.

If lithium-ion is so popular, why are venture capitalists interested in new battery types? There are several reasons. The incorporation of renewables into the grid requires a new generation of scalable, long-duration batteries to capture surplus power produced during peak periods. Batteries will be key to grid integration and to time-shifting electricity delivery, eliminating the need for inefficient and polluting peaker plants. Also, given the recent incidents involving lithium-ion batteries in consumer devices (and aircraft), the industry has a significant incentive to explore safer chemistries and battery structures. In a separate article, we will discuss the differences in the approaches these startups are taking. From the point of view of Cypress River Advisors, these are key drivers creating disruption in a battery industry long mired in incremental improvement.

The challenges in the chemistry

What are the challenges in battery chemistry? Ideally, you want a battery that has high coulombic efficiency; in plain English, all the charge put into a battery comes out (subject to resistive losses). You also want stable power output over a wide range of operating conditions (temperature and humidity), and a rechargeable battery that you can cycle over and over. Each new chemistry has its own limitations, and the chemistry also informs the kind of packaging and safety requirements needed for safe operation. All these factors are interrelated and interdependent. Needless to say, these are challenging research problems. Let us examine a few of the technical challenges the industry needs to solve.
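To make coulombic efficiency concrete, here is a minimal arithmetic sketch (the charge figures are illustrative, not measured data):

```python
def coulombic_efficiency(charge_in_mah, charge_out_mah):
    """Fraction of the charge put in during charging that is
    recovered on discharge (1.0 would be a perfect battery)."""
    return charge_out_mah / charge_in_mah

# Illustrative cycle: 3000 mAh charged in, 2964 mAh recovered.
ce = coulombic_efficiency(3000.0, 2964.0)
print(round(ce, 3))  # 0.988

# Losses compound over cycles: if every loss were permanent,
# only ce**n of the lithium inventory would survive n cycles.
print(round(ce ** 100, 3))  # roughly 0.3 after 100 cycles
```

This compounding is why a rechargeable cell needs coulombic efficiency far above 99% to survive hundreds of cycles.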

Energy and Power Density

Energy density is the amount of energy a battery stores per unit of mass or volume. Increasing the energy density means you get more energy for a given battery size; for example, an electric vehicle can travel farther without increasing its weight. Higher energy density is particularly critical for connected devices, where the size of the battery is constrained by consumer demand for sleeker and thinner designs.
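A back-of-the-envelope sketch of why density matters for range; all of the figures (pack mass, densities, consumption) are illustrative assumptions, not data for any real vehicle:

```python
def ev_range_km(pack_mass_kg, energy_density_wh_per_kg, consumption_wh_per_km):
    """Rough driving range from pack mass and gravimetric energy
    density, ignoring pack overhead and drivetrain losses."""
    return pack_mass_kg * energy_density_wh_per_kg / consumption_wh_per_km

# The same hypothetical 400 kg pack at two cell-level densities:
print(ev_range_km(400, 150, 160))  # 375.0 km
print(ev_range_km(400, 250, 160))  # 625.0 km
```

The same pack mass goes much farther at higher energy density, which is exactly the trade the industry is chasing.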

The most expensive components of a lithium-ion battery now come from the non-active materials: the current collectors and separators. If you increase the battery’s energy density, less of those materials are needed for a given amount of energy, so increasing energy density is one of the best ways to decrease cost. Take the cathode, for example: both consumer devices (lithium-cobalt oxide) and larger devices like a Tesla electric vehicle (nickel-cobalt-aluminum) rely on cobalt. In 2016, cobalt cost $10.88 USD per pound. At the time of writing, the price of cobalt has nearly doubled to around $25 USD per pound, all before processing and manufacturing.

That being said, researchers at UC Berkeley and Carnegie Mellon note that the costs of lithium-ion batteries continue to decline despite volatile cobalt and lithium prices. The diversity of material constituents in emerging battery technologies appears to serve as a buffer against material price shocks. Efficient assembly of battery cells and packs, along with technological learning, may be driving costs even lower. It is possible that large battery companies like LG and Panasonic cross-subsidize their battery research and development, though the extent of any cross-subsidization remains uncertain. Policy incentives in China to accelerate electric vehicle growth drive demand and subsidize manufacturing costs. The level of subsidies in China and of cross-subsidization between companies remains an area of uncertainty, leaving the possibility that true costs are not reflected. All things considered, when building a better battery, improving technology performance through energy density can deliver better returns than addressing volatile lithium or cobalt prices alone.

There is a drawback to increasing energy density: almost always, as energy density goes up, battery lifetime goes down. At the cell level, increasing the energy density means higher active material fractions, which means the other components that help the battery function, such as the binder and conductive additives, are decreased. At the materials level, increasing the energy density often means squeezing more reactivity out of materials, which pushes them into conditions that are less stable. If you want a rechargeable battery, you need reversibility and stability. As you can see, there are serious trade-offs that a battery designer needs to balance.

One promising area of research is over-lithiated metal oxide batteries, where researchers are trying to solve the voltage fade issue. Another chemistry being explored is lithium-sulfur; however, the “polysulfide shuttle problem” can cause self-discharge, low charging efficiencies and irreversible capacity losses. Needless to say, battery chemistry is complex, not just in the main reaction but also in side reactions. Each new chemistry has a whole host of other issues to address, which we will discuss in the next section.

Side Reactions

What are side reactions? These are secondary chemical reactions that occur at the same time as the main reaction that produces electricity. Batteries perform differently under different applications and operating conditions. In hot environments, lithium-ion batteries in EVs need to be properly cooled; otherwise, battery life and driving range are irreversibly affected. In these batteries, unwanted side reactions degrade performance. The graphite anode becomes plated with a non-reactive film, the solid electrolyte interphase (SEI), which hurts long-term battery performance. While the graphite beneath can still charge and discharge, the SEI adds resistance to the process. Furthermore, the lithium trapped in the SEI is no longer available to the battery, decreasing its capacity. The science underlying how the type of graphite, the electrolyte composition and the chemical conditions affect the formation and growth of these films is still not well understood.

Gas Evolution

Side reactions can also result in gas building up inside the battery packaging.  Hydrogen gas can build up when the battery is overheated, overcharged or drained of charge for too long. These gases can react explosively with a flammable electrolyte. Even during the normal course of charging and discharging, the movement of ions can break down the electrolyte, building up carbon dioxide gas inside the battery.  Overcharging can damage the separator, leading to sudden discharge. It is important to note that subtle defects from the manufacturing process may be exacerbated.  Over time, as pressure builds up, the structural integrity of the battery package is compromised.  Pouch cells are especially vulnerable since they lack hard structural elements.

Structural Changes

As a battery cycles through charge and discharge, the particles in the battery can break apart under the stress. If a particle fractures, it can lose contact with the conductive network of carbon additives that connects it to the current collectors. When that happens, the particle is disconnected from the battery, resulting in lost capacity.

A number of companies are experimenting with nanotechnology to build electrodes that exhibit better mechanical compliance.  There is, of course, a downside: the higher surface area available for reactions also makes parasitic reactions more likely.  With respect to packing, how much nanomaterial you can fit into a given space also has an impact on performance.

Dendrites & Lithium Plating

In a Li-ion battery, lithium ions are intercalated, i.e. inserted, into a metal oxide lattice. The intercalation and de-intercalation process, the movement of ions during charging and discharging, can cause the battery package to expand and contract, as we previously discussed. This is undesirable because it can compromise the packaging.  More importantly, lithium may not properly return to its lattice but instead form dendrites.  If a cell charges too quickly, these dendrites grow and may pierce the separator, leading to a short circuit.  As we mentioned earlier, lithium may also plate the graphite anode instead of properly re-inserting itself into the lattice.


All of the above factors also impact the safety of batteries.  Batteries work through a combination of simultaneous electrochemical reactions and physical safety measures, which have to work together to deliver high energy density in a safe, rechargeable package. The flammability of the organic electrolyte is a persistent concern in the consumer and transportation markets. It is a major reason why many groups are investigating solid-state batteries, which replace the organic solvent with a solid electrolyte, greatly reducing the risk of significant heat or gas buildup. Again, this approach is not without its challenges: the lithium must now be transported through a solid, and the resulting resistance is a tough problem to solve. On top of this, researchers need to develop cost-effective deposition and synthesis methods.


The challenges in performing research

As we alluded to earlier, even the particle size of the components can impact battery performance. By now, you can see that the interplay of physics and chemistry is complicated.  Batteries are an assemblage of composite materials: both anode and cathode are porous composites containing active material, binder and conductive additives. Component materials may be contaminated during the manufacturing process, leading to unexpected side reactions. Particle size and shape can also add to the variability. Furthermore, the reactions in the battery do not occur uniformly throughout the electrodes.  In the SEI example above, the passivating layer is extremely fragile; it forms in situ during the first charge and discharge cycle, but if you open the battery to perform tests, it changes or falls apart.  That makes it that much more difficult for a researcher to identify a solution.

So how can you observe the changes in a battery in situ?  Will Chueh’s materials science group at Stanford has gone so far as to use synchrotron-based X-ray techniques at SLAC and at Berkeley Lab’s Advanced Light Source to observe these changes in the nanoparticles of battery components.  As you can imagine, obtaining access to X-ray microscopes and performing these experiments is not easy.  Only a few academic groups and large corporations have the ability to conduct these types of tests.

Another challenging aspect of research is simulating long cycle lives.  For example, your smartphone battery is expected to last several years, which is equivalent to as much as a thousand cycles.  Making sure a new type of battery will perform to specification is time-consuming, to say the least.  So companies and researchers use specialized test equipment to simulate the various conditions found around the world.  To hasten the process, they test banks of batteries at elevated temperatures and compare the results against a room-temperature control. Even so, running accelerated tests is still time-consuming compared with normal cycling. Simulating real performance under a variety of heat and moisture stressors also remains a serious challenge for batteries used in vehicles or grid-scale applications.
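Accelerated testing at elevated temperature is usually planned with an Arrhenius-type rule of thumb (our addition; the article does not specify which model any particular lab uses). A sketch, with an assumed activation energy:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_test_c, t_ref_c, ea_ev=0.5):
    """Arrhenius-type speed-up of degradation chemistry at a hot
    test temperature versus a reference temperature. The 0.5 eV
    activation energy is an assumption; real cells vary widely."""
    t_test_k = t_test_c + 273.15
    t_ref_k = t_ref_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_ref_k - 1.0 / t_test_k))

# Cycling at 55 C instead of 25 C under these assumptions:
print(round(acceleration_factor(55, 25), 1))  # roughly 6x faster aging
```

Even a six-fold speed-up means a thousand-cycle qualification still takes months, which is why accelerated testing only partly relieves the time burden.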

The reality of the energy business

Building a better battery takes more than assembling different chemistries and reading out the voltages. A wide range of factors impacts a battery’s performance and lifetime. Researchers need a way to understand the reactivity at different specific places in the battery.  If you want to understand what is fundamentally happening to the materials in the battery, you need more sophisticated tests to drive the science forward.

Moreover, the reality of the energy business comes down to one thing: cost.  As Professor Chueh notes, “What battery technologies we use for a given application will depend on the cost structure of the specific technology produced, stored and utilized.”  It doesn’t matter whether the power source is renewable or not; the challenge scientists face is to develop technologies that are competitive in the market.

The New Economics of Space

Last week, a record 104 satellites reached orbit on a single rocket. Eighty-eight of those satellites are from Silicon Valley start-up Planet Labs. Space used to be the exclusive domain of nation states and the likes of NASA; not anymore. Space services are now a $330 billion business in which commercial off-the-shelf parts, miniaturization and new players bring cost savings such that even high schools can send a payload into space. Why is this launch so significant?  It’s not just the number of satellites.  The Planet Labs constellation can image the entire planet in a 24-hour period.  This heralds a change in how business will be done, from commodities trading to open-source intelligence.

How much does it cost?

According to NASA, it cost the American taxpayer on average 450 million dollars to send the shuttle into orbit. For corporations sending broadcast and telecom satellites, depending on the payload and orbit desired, a launch runs roughly one third to two thirds the cost of a shuttle mission.

Traditionally (again, depending on your payload and orbit), launch costs account for 35-40% of the overall budget. But that is just sending your payload into space. Satellite builds must handle the tremendous g-loads and shaking during the first eight minutes of launch, which is no small engineering feat.  Consequently, satellite build costs account for about 50% of the budget. There is also a lengthy approval process for acquiring spectrum; no spectrum, no satellite. This adds another five to six percent. Last but not least, insurance can equal 10% of overall costs, depending on the failure rate of your launch provider.
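The budget shares above reduce to simple arithmetic. A sketch using the low end of each quoted range so the shares sum to 100% (the $200M mission total is a hypothetical figure, not a real program):

```python
def mission_budget(total_usd_m, shares=None):
    """Split an overall satellite mission budget using the rough
    shares quoted above: launch 35-40%, build ~50%, spectrum 5-6%,
    insurance up to 10%. Low-end shares are used so they sum to 1."""
    if shares is None:
        shares = {"launch": 0.35, "satellite build": 0.50,
                  "spectrum approval": 0.05, "insurance": 0.10}
    return {item: total_usd_m * share for item, share in shares.items()}

# A hypothetical $200M mission:
for item, cost in mission_budget(200).items():
    print(f"{item}: ${cost:.0f}M")  # launch $70M, build $100M, ...
```

Seen this way, it is clear why cheaper launch alone only attacks about a third of the problem; the satellite build dominates the budget.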

New kids on the block

Elon Musk’s dream of re-selling used rockets at a discount has led the launch industry to re-examine its single-use rocket manufacturing process and its adherence to the cost-plus business model. A thirty percent discount on a 50 million dollar rocket is attractive. From a capital intensity perspective, launch vehicle reuse is also attractive for SpaceX, provided it can manage the risk associated with refurbishment and its designs are modular enough to incorporate new technologies that improve performance, notwithstanding the insurance costs.

But SpaceX isn’t the only new satellite launch player in the market. Rocket Labs in New Zealand is focused on delivering small sats into space using rockets engineered from composite materials. Rejecting ULA and SpaceX’s push toward reusability, Rocket Labs is focused on reducing the cost of launch infrastructure while increasing the frequency of launches for small sats. Their price point? Rocket Labs estimates their launch prices will be in the single-digit millions for low Earth orbit. They are currently testing their two-stage Electron for LEO payloads up to 150 kilograms.

Why are they focused on small satellites? This is the sweet spot for growth in the space industry, which we’ll discuss in our next article later this week.

Reality Check on Intelligent Assistants with founder Dennis Mortensen


Cypress River Advisors’ Reality Check Series is back!  CEOs from the bleeding edge of tech tell it how it is.  Cypress River’s Lubna Kabir asks Dennis Mortensen, CEO of x.ai, about Amy and Andrew, their intelligent assistants. x.ai is one of several tech companies focused on building vertical AIs.

Weeks ago, my assistant worked with Andrew to set up a time and place for the shoot.  Andrew is an application built upon a neural network specialized in natural language processing.

Enjoy the clip!

Enterprise IoT is sexier than you think

Ok, only investors and analysts think enterprise IoT is sexy. But since I have your attention: nearly a year ago Verizon launched ThingSpace, their platform-as-a-service IoT play. The idea was to simplify and reduce the cost of IoT development.  Investors have long accused carriers of becoming no more than utilities; Verizon is an example that shows they can still have moxie.  These are the kinds of platform investment activities we at Cypress River like to see from the carriers.  Carriers can deliver platform-as-a-service initiatives that:

  • simplify the effort for enterprises to deliver IoT initiatives at scale
  • build in transport-level security from the outset, regardless of access method
  • force enterprises to address long-term issues like IoT device retirement

A year on, Verizon is making progress.  Verizon just announced the partnership it hinted at during MWC Barcelona.  Like Sigfox, they are pushing down the value chain into the chip: Verizon is working with Qualcomm to integrate ThingSpace’s services and APIs with the Category M chipset.  These chips are geared toward low-power, low-bandwidth applications, for example LTE-enabled water and gas meters, SCADA systems, point-of-sale or even asset tracking.

Yes, this is largely an enterprise play, and in our view the enterprise segment is the most interesting.  Deploying IoT at scale is not easy.  Firmware management, diagnostics, data transport security and provisioning are major headaches for the enterprise and ultimately the consumer.  The ability to facilitate these activities over-the-air is efficient at scale. Take Amazon’s Kindle, for example: consumers have benefited from downloading books anywhere with a 3G signal.  Now, some of you may be thinking: “Is IoT making money?”  Verizon made a cool $690 million in IoT and telematics last year.

Verizon is one of the few carriers that have embraced their future role as a secure platform-as-a-service provider.  We at Cypress River would like to see more mobile operators follow this model, even on a collaborative basis.  For IoT to be successful, enterprises need cheap and secure services to reduce the overall cost of development.  Carriers can also deliver value to the consumer by enabling built-in transport layer security.



What cyber warfare looks like

If you want a sense of what cyber warfare looks like, this is it. Warfare is not waged only with guns and bombs anymore. It is getting into critical infrastructure and turning off the power, and consequently the water. It is prepping the battlefield by taxing civilian support services, sowing confusion and degrading any counter-response.

Your business sits on that battlefield where there are no borders.  Your customers and shareholders are collateral damage.  Endpoint and cloud security are not just the responsibility of your CIO and CTO, it is a CEO and Board responsibility.

There are no magic bullets. There is no single solution. It is a constantly changing game. You need to invest in defense in depth: it is hardware, software, processes and people that ensure you have the ability to mitigate and recover quickly.


Here’s a clip from our interview with Chris Stott, CEO of Mansat.  Chris is a spectrum management and space industry veteran.  He geeks out like the rest of us in tech but reminds us that the emphasis in education should be STEAM, not STEM.  Without the arts, we can’t inspire our kids to dream.


Mung Ki Woo on why “retail pay” makes sense

Mung Ki Woo, OmnyPay Chief Partnerships and International Officer


In his February 2016 letter to shareholders[1], Sears’ Chairman, Edward Lampert, wrote that retailers were profoundly feeling the “disruptive changes from online competition and new business models.” In response, Sears would focus on “integrated retail”, or what many others in the industry call “omni-channel”, i.e. the seamless blending of digital and physical sales channels to create new customer experiences.

Today, there is a real dichotomy between most retailers’ online and in-store consumer experiences. After years of massive effort, many retailers have revamped their web and mobile storefronts. Online, consumers enjoy well-thought-out user interfaces. Browsing and selection are easy. Consumers with store accounts enjoy a personalized experience. Retailers have learned to leverage browsing and purchase history data to create product recommendations, offers, loyalty point earning/redemption and preferential pricing for loyal customers. The end result is increased sales and profits, by minimizing friction during shopping and at checkout. Unfortunately, the same cannot be said about the customer experience at the physical stores.  Moreover, eMarketer recently estimated that 50% of all e-commerce transactions would be done via mobile by 2017.[2] That is why many retailers are investing heavily in online-to-offline systems via mobile applications to augment their physical store experience.

In that context, it makes sense for retailers to enhance their mobile apps with loyalty, coupons, offers and payment, all performed in a single step. During the holiday season, neither consumers nor the stores want people in line fumbling with coupons, cards or mobile wallets. There is tremendous utility in expanding the retailer’s app beyond a simple marketing device: in the store, it becomes a means to offer substantial consumer convenience while enabling in-store conversion.

Let’s take a deeper look. Today, the consumer has to look at several screens: the phone, the display of the cash register, and the payment terminal. Each screen provides only part of the information the consumer needs. The consumer has to perform several different physical actions: present the phone, the coupon(s), the plastic card(s), press buttons on the PIN pad, enter information (or provide this information verbally), sign, etc. Just writing this paragraph is tiring!

If the consumer’s mobile phone were integrated with the point of sale and the merchant back-end systems, the retailer could create a simplified, seamless user experience.  Services provided by the retailer (promotions, loyalty, payment) would be seamless, and the consumer would see reduced time and complexity all the way through checkout. It just makes sense.

Moreover, many consumers hold a retailer’s branded payment card (either a closed-loop private label card or a co-branded card affiliated with one of the payment networks). But consumers often leave their store-branded cards at home. Once the card is digitized in the mobile application, consumers will always have it with them. Growing the user base of the store-branded card really matters for retailers: when a consumer uses such a card, not only does the retailer save on the cost of payment, it also generates revenue from the partner bank that issues the card. For example, Synchrony, one of the main players behind these store-branded cards, paid out 2.7 billion USD to its merchant partners in 2015[3].

In response, several major retailers have taken action and introduced their own “Retail Pay” services this year. Recently at Money2020 in Las Vegas, a wide range of speakers addressed this pain point. I have no doubt that other retailers will join the fray soon. Retailers do have to act. Consumers are establishing their mobile shopping behavior now. Tomorrow, the game will have been played.  –Mung Ki Woo

[1] Source:

[2] Source:

[3] Source:


Mung Ki Woo is the Chief Partnerships and International Officer of OmnyPay, where he brings over two decades of digital technology and payment experience to the team.  Prior to OmnyPay he was an Executive Vice President at MasterCard, responsible for the development and commercialization of mobile payment product platforms and solutions around the globe.  He and his team built the tokenization services used by “Brand Pay” services (e.g. Apple Pay, Android Pay, Samsung Pay). He previously served as Vice President of Electronic Payments and Transactions at Orange, where he created the “Orange Money” mobile payment program across Africa and the Middle East; the program had 10 million customers and 4 billion euros’ worth of transactions as of 2014. Woo also guided the deployment of mobile contactless services in Orange’s European operations. He is a graduate of Ecole Polytechnique and Telecom ParisTech and speaks three languages. Woo is also a Visiting Professor in the Design Management Program at the Pratt Institute.


Making sense of AI: Machine Learning and Deep Learning for investors…

Machine Learning “is a type of AI where the ability to learn is not explicitly programmed.” – Arthur Samuel

Ask any developer: writing code is a time-consuming and tedious task. But what if you could teach your computer by showing it, rather than writing thousands of lines of code?  As discussed previously on the CRA Blog, there are an exciting number of artificial intelligence plays in the market, particularly in machine learning.  This year alone we’ve seen several large investments in startups at various stages.  Machine learning innovation isn’t limited to the startups.  The Big 6 (IBM, Google, Amazon, Microsoft, Apple and Samsung) all have a number of specific initiatives, some of which are already in use by consumers.  In September they formed the consortium Partnership on AI.  Even Uber has its own AI lab, built on its acquisition of Geometric Intelligence.

The current set of AI startups can be broadly organized into horizontal and vertical plays.  Siri, Cortana, Google Assistant and soon Viv are horizontal AIs you are familiar with.  These early “Voice OS” platforms can answer simple questions like “What’s the weather for Wall Street?” and perform simple tasks like setting a wake-up alarm.  Vertical AIs are focused on solving one problem very deeply, e.g. natural language parsing for Urdu, fraud analytics for card transactions or, in the case of Snap’s filters, recognizing the outline of your face so you can vomit a rainbow.

Many of these services are enabled through deep learning algorithms, a subset of machine learning techniques. Deep learning loosely mimics the brain’s neural networks.  While we could spend several posts discussing perceptrons, convolutional or feed-forward networks, the important thing to remember is that deep learning is an effective means of “teaching” a computer program to perform a task.  Furthermore, given the current pricing and availability of computing power and memory, it is cheap and widely available enough that even your mobile phone can run a deep learning algorithm.

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”  — Tom Mitchell, Carnegie Mellon School of Computer Science.

Andrew Ng, Chief Scientist at Baidu, is fond of quoting him. This is a more nuanced definition than Samuel’s at the top of this article, and it shows that deep learning is nowhere close to the way our brains work.  When you conceptualize machine learning this way, it is easier to understand its limitations.
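Mitchell’s definition can be illustrated with a deliberately tiny learner (our toy example, not a deep network): the task T is predicting a coin’s probability of heads, the experience E is observed flips, and the performance P is the estimation error, which tends to shrink as E grows:

```python
import random

def estimate_bias(flips):
    """The learner's 'model' is just the running average of its
    experience: the observed fraction of heads."""
    return sum(flips) / len(flips)

random.seed(0)
TRUE_BIAS = 0.7  # hidden parameter the learner must discover

def flip():
    return 1 if random.random() < TRUE_BIAS else 0

for n in (10, 100, 10_000):                        # growing experience E
    flips = [flip() for _ in range(n)]             # task T's training data
    error = abs(estimate_bias(flips) - TRUE_BIAS)  # performance P
    print(f"E = {n:>6} flips, error = {error:.3f}")
```

The learner never stores a rule for coins; its performance on the task simply improves as its experience accumulates, which is exactly the E/T/P framing.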

What can’t deep learning do? Deep learning techniques can’t carry out long chains of inference to arrive at an answer unless specifically programmed to.  In most cases, there is a range of acceptable answers depending on the problem being solved.  Deep learning models also need to be explicitly designed to factor in time. This is a significant challenge for parsing and, more importantly, comprehension.  For example, if you thought following Lost’s storyline was difficult, imagine how hard it would be for an AI.  Today’s AIs are rudimentary approximations of our brains.  Furthermore, almost all successful deep learning applications use supervised learning with massive amounts of human-annotated data.  This is a scale issue for scientists and investors alike.

Until AIs can learn by themselves at scale, we won’t be seeing Westworld anytime soon.  The most visible progress has been Google DeepMind’s success with self-play reinforcement learning techniques, i.e. the AI learns by playing against successive versions of itself.  AlphaGo’s matches with Go champion Lee Sedol show the promise of the technique.
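Self-play does not require DeepMind-scale infrastructure to illustrate. Below is a deliberately tiny sketch using fictitious play, a classic game-theory learning rule far simpler than AlphaGo’s deep reinforcement learning: two copies of the same agent play matching pennies against each other, each best-responding to the other’s observed move history, and their behavior converges toward the 50/50 equilibrium.

```python
# Two copies of one learning rule play "matching pennies" against each other.
# Each agent best-responds to the empirical frequency of its opponent's past
# moves (fictitious play); self-play drives both toward the 50/50 equilibrium.

# Pseudo-counts of the moves each player has observed the *other* player make
counts = [{"H": 1, "T": 1}, {"H": 1, "T": 1}]

def best_response(opp, matcher):
    # The matcher wins by matching the expected move; the mismatcher by differing
    expect_heads = opp["H"] >= opp["T"]
    if matcher:
        return "H" if expect_heads else "T"
    return "T" if expect_heads else "H"

for _ in range(10000):
    move0 = best_response(counts[0], matcher=True)
    move1 = best_response(counts[1], matcher=False)
    counts[0][move1] += 1   # player 0 learns from player 1's move
    counts[1][move0] += 1   # player 1 learns from player 0's move

# Empirical frequency of heads in player 0's play, as observed by player 1
freq_heads = counts[1]["H"] / sum(counts[1].values())
print(f"player 0 heads frequency: {freq_heads:.3f}")
```

Neither agent is told the optimal strategy; it emerges purely from playing against a copy of itself, which is the core intuition behind self-play at any scale.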

Despite its current limitations, deep learning has, in a few short years, significantly improved the accuracy of computer vision and natural language processing. Geoffrey Hinton, a prominent AI researcher at Google, believes that deep learning “will be put on a chip that fits into someone’s ear like a real babelfish.” If voice is to be the operating system of the future, then neural networks and their associated techniques are foundational capabilities for contextual computing.

This is something Cypress River has spent a great deal of time analyzing. As an investor, you may ask, “Do I place bets on horizontal or vertical AIs?”  Cypress River’s research team has built our own graph of the leaders in the field, their areas of research and their students.  For new players, finding a horizontal play will be difficult, given the hollowing out of AI scientists from the universities.  The giants of deep learning (LeCun, Hinton, Manning, Ng et al.), as well as their graduate students, are pursuing research opportunities with the Big 6.

What to do then?  Well, it is not hard to imagine that vertical AI plays will “plug into” horizontal AIs in the near future. With this in mind, Apple and Amazon have already begun building their APIs to plug into Siri and Alexa and evangelizing them with developers.  The success of a vertical AI will be driven by the startup’s ability to scope, solve and scale a sufficiently narrow enterprise or consumer problem. Take a moment and watch Cypress River’s recent Reality Check interview with Dennis Mortensen of Their intelligent assistants, Amy and Adam, are focused on doing one thing well: scheduling.  If and their vertical brethren are successful, the exit opportunities are numerous. If you have more questions about AI and its impact on the industry, reach out to Cypress River’s partnership or research teams; they will be happy to chat with you.

IoT Security is a CEO Problem

As you read this sentence, the Mirai botnet is attacking Deutsche Telekom’s routers. The infection has spread to Brazil, Britain and Ireland. Right now, Reuters is reporting that 4.5 percent of DT’s fixed-line customers don’t have service.

Whether you are in China or America, the internet is the primary mode of growth and essential to the normal course of daily business.  Without a doubt, the Internet is now an essential infrastructure resource, like electricity and water.  The internet of things is its natural expansion into the real world.  It is the growth story for companies from Apple to Zurich Insurance.  In a few short years, we have seen a flood of web-enabled wearables, medical devices, home security devices and intelligent assistants integrate into our homes, health and businesses. Looking back at this past year, nary a month has gone by without some massive data breach, hack or DDoS attack taking out a portion of the internet.  Just this week, the SF Muni was hacked and ransomed.  While the news is calling this a warning sign for other cities, it should be a warning sign to the CEO of any company with any aspect of IoT, SaaS or PaaS in its business.  You are probably wondering why Cypress River Advisors, a strategy firm, would raise this issue.  Traditionally, the board room has treated information security as the domain of the CTO. The problem is:

Information security is a CEO problem, not just a CTO problem.

As markets evolve, so must corporate business strategy.  The same applies to information security: it now needs to be part of your business strategy.  The CIA triad (confidentiality, integrity and availability) guides the policies of a company’s infosec posture.  All products and services will move toward cloud-based services in some form or fashion.  The CIA triad also defines the customer relationship.  The consumer, regardless of the terms of service, has an implicit expectation that their data will always be available. They also expect the confidentiality and integrity of their data to be maintained.

Consider the Target situation.  Hackers breached an external vendor that supported Target’s HVAC system. Using stolen credentials, they gained access to Target’s web systems, which were in turn connected to a point-of-sale system. Target, perhaps in their desire to mine data, maintained a database of customer information, credit card numbers and card verification values.  Everything is connected.

It is not hard to imagine an IoT product or service suddenly finding success, where management realizes their security infrastructure can’t scale but goes ahead anyway.  Ask your dev team: it is incredibly hard to build in security after the fact. In fact, it has happened already.  In October, the Mirai botnet, the same malware strain used to attack a security researcher, was turned against Dyn, the DNS provider being acquired by Oracle.  Poorly secured IoT devices, specifically DVRs and IP cameras made by an OEM supplier, disrupted internet service across the eastern seaboard of the US.  In the rush to profits, it is not surprising to see some organizations skip infosec best practices.  Companies white-labeling or incorporating XiongMai Tech’s products are probably feeling the impact on the bottom line.

You can’t relegate all infosec responsibilities to the CIO/CTO.  Information security touches all aspects of any organization delivering services via the Internet. Implementing ISO certification or PCI-DSS checklists or purchasing a next-generation firewall isn’t enough.  Everyone has a powerful computer in their pocket, and your company’s BYOD policy means vulnerable devices are running on the office network.  Infosec requires more than hardware and software; there are people involved.  Humans are infinitely easier to hack, and hacking them doesn’t require any tech.  Kevin Mitnick used social engineering to hack people for years until he was caught.

My personal nightmare scenario is a product rushed to market where the CEO wants a biometric solution to secure an IoT device but the coder didn’t implement it properly or used an untested software component.  Why does that scare me?  If someone can successfully hack the endpoint device and recover the biometric data, they have the keys to the kingdom.  You can’t revoke your fingerprint unless you cut your finger deep enough to scar it.  What if the biometric were your voice print?

There are a number of industry organizations attempting to tackle the issue from different perspectives.  From the standards perspective, the Open Connectivity Foundation released their initial standards in October; Samsung, Intel, Microsoft, Qualcomm and a few others participate.  From the mobile wireless perspective, the GSMA released their IoT Security Guidelines and self-assessment.  Consider CISA or CISSP training for the management team and your staff; I myself am a CISSP from the early days of the Internet.  Both programs provide training for everyone from the C-suite to the vendors.  I also recommend you take a half hour and watch Morgan Marquis-Boire talk about data contraception.  Morgan is a well-known security researcher and is the fellow responsible for protecting journalists at First Look Media.

It is about establishing a company culture and process that cuts across all business operations, from the design of your product to your vendors.  The truth of the matter is this: if it isn’t a little painful, you probably aren’t doing enough.  Information security takes practice, training and maintenance to implement right.  Your consumers are creating all kinds of data.  You may not even be monetizing it.  But if you handle it improperly and lose it, you will surely feel it in your brand equity and the bottom line.


Digital Health: Hacking the patient

It’s not about the technology. It is always about the patient.

A few weeks ago, a passenger on my flight to Tokyo fell ill.  Fortunately, my bag was filled with FDA approved, IoT enabled diagnostic equipment; I had just presented to senior executives on the future of FDA approved IoT devices. (Yes, yes, I know I am a geek.) The doctor on board found the ECG and heart rate monitor useful in evaluating the patient. Thankfully, the passenger made it safely to treatment in Tokyo. In all my years of being a road warrior, a situation like this happens perhaps once in a blue moon.

Last week, it happened again.

On my flight back from London, the in-flight service manager called out for a physician. I grabbed my bag and quickly headed to the rear of the aircraft. By the time I got there, the patient was already surrounded by an obstetrician and a cardiologist. The passenger was pale, sweating and weak. She complained of blurred vision and nausea. Ultimately, it was determined the passenger was hypoglycemic (low blood sugar), a serious condition for any diabetic. Fortunately, they were able to treat her and she made it home safely.

This incident is a reminder that for all the diagnostic technology we have at our disposal, ultimately it is about the patient. Compliance with a treatment protocol is key to addressing the health needs of the patient. In all the years I have worked with startups, far too many executives focus on the technology, not the user experience. Your technology platform can have the best sensors, deep learning analytics and clever packaging. However, if your user experience doesn’t drive patient compliance, your tech is just that: tech. Startups need to remember that it is about the patient.

Patient compliance, not the FDA, is the hardest nut to crack. If your technology platform cannot consistently help patients manage their conditions and achieve optimal clinical outcomes, spend some serious time examining how you manage the patient experience. If a physician has a hard time with patient compliance, then your technology team has to work that much harder.

Ultimately, if we as an industry want to talk about personalized medicine and changing patient outcomes, we must focus on hacking the patient. This includes automated drug delivery in just the right amount, remote monitoring by authorized caregivers, and other new methods of engaging and collaborating with the patient.

Medicine is a human experience and no one should forget that.

A funny thing happened on the way to Tokyo: a wearables story

Earlier this month, I flew to Tokyo on a Hello Kitty plane. It is an almost annoyingly cheerful Boeing 777-300ER decked out with Hello Kitty branding, right down to the embossed toilet paper and bow-shaped carrots in your in-flight meal. It is a short flight, just long enough to take a quick nap.

Mid-flight I awoke to a commotion. I opened my eyes and saw the passenger in the row next to me having difficulty. He was pale, feeling dizzy, sweating and weak. The passenger clearly was in discomfort. The in-flight service manager quickly called for a physician over the PA.

As the physician began to assess the passenger, I realized that the doctor did not have the usual tools at hand, e.g. stethoscope, blood pressure monitor, pulse oximeter and electrocardiogram. Funny enough, I had a pulse oximeter, an electrocardiogram and a heart rate monitor with me.

So, I am not a physician. I am a management consultant who works on behalf of Silicon Valley’s VCs, specializing at the intersection of mobility and tech.  The most medical training I have is from when I trained to be a first responder long ago. Other pursuits led me elsewhere, but that training left a lifelong desire to identify portable diagnostic technologies that reliably improve patient outcomes. But I digress.  Fortunately for the doctor and patient, I had just presented my assessment of the future of mobile healthcare to some executives, which included a demo of FDA approved devices.

On my wrist, I had an Apple Watch. On the back of my phone, I had AliveCor’s FDA approved ECG.  In my bag, I had a pulse oximeter made by a Taiwan OEM.  It was easy to place my Apple Watch on the passenger’s wrist to begin heart rate monitoring.  As for the ECG, it took a bit to re-sync the device to my phone, but the doctor got the reading he wanted.  (As a matter of disclosure, I am not in any way compensated by Apple, AliveCor or their investors.)  The doctor was relieved to have some of the diagnostic equipment needed to better examine the patient. The ECG gave information regarding arrhythmia, the Watch sensor could maintain rhythm monitoring, and the pulse oximeter could determine the patient’s O2 saturation.

As for the passenger, to everyone’s relief, he received the medical care needed to get him safely to Tokyo.

Those of you who know me know that I have taken a clear position on wearables and mobile healthcare: non-FDA approved wearables are at best wellness products selling aspirational health.  They are a great means to encourage a healthier lifestyle, but they are not medical devices.  FDA approved devices require rigorous clinical testing to prove their claims. As frustrating as it may be for wearable startups and their inventors, the FDA’s rules and processes protect the public from design errors, inconsistency and quackery.

Now you might think that, based on this experience, I would be gung-ho about wearable startups. Frankly, it depends.  For example, Withings, a French startup, recently began selling a 100 USD FDA approved infrared temporal artery thermometer.  It may seem magical relative to the glass thermometer. It is certainly beautifully designed, and the integration with the handset is wonderfully done.  But it is not new technology.  I have one at home, which I bought for a third of the price years ago; temporal thermometers have been used by hospitals and clinics for years.  Price, design, convenience and consistency are the hallmarks of a useful medical device, and they require a tremendous number of iterations.  For the most part, I am more inclined to evaluate wearable startups that are seeking FDA approval and are prepared for its rigors.  Startups need to be well capitalized and properly resourced to meet the clinical testing demands of the FDA, and they need physicians to take a prominent role in the development of the product and the operations.

Today’s wearables are just a well-designed step in the long iterative process of product development. It is still early days for mobile healthcare, especially for FDA approved diagnostic equipment, but there is certainly promise for better patient outcomes.