How smart is it to use smart technologies?

When I mention that I work in the Information Security space, I frequently get a response like “oh, that’s great, because that’s all the new stuff.” It seems everyone thinks cyber security came onto the scene in the last 5 to 10 years, which is when they would’ve heard about security breaches. But is it actually that new?

I remember movies from the 80s that dealt with computer hacking and cyber warfare. In the real professional world, one of the most common vulnerabilities in web applications has long been SQL injection, which was first publicly described in 1998. SQL injection is rated as “easy to exploit” and “severe” in impact, meaning it is a very dangerous vulnerability. It has been 22 years since then, so where are we now?

New day, same problems

Well, some things have stayed the same… OWASP ranked injection flaws (the category SQL injection belongs to) as #1 in its Top 10, the ten most critical web application security risks, for 2017 (the same spot they held in the 2010 and 2013 editions). An injection flaw means a malicious user can trick an application into executing statements it was never supposed to execute by injecting them through its inputs. This is possible because the code was not written defensively. The consequences? You name them: a) unauthorized access, b) loss of control of a server, c) a full database dump, d) all of the above…
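To make the idea concrete, here is a minimal sketch of a classic SQL injection and its standard fix, using Python's built-in sqlite3 module. The table, account names, and payload are purely illustrative, not taken from any real product:

```python
import sqlite3

# A throwaway in-memory database with one user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
conn.commit()

def login_vulnerable(name, password):
    # BAD: user input is concatenated straight into the SQL statement,
    # so the input can rewrite the statement itself.
    query = ("SELECT * FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # GOOD: placeholders make the driver treat input strictly as data.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

# The classic payload turns the WHERE clause into a tautology:
# ... AND password = '' OR '1'='1'  -- true for every row.
payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # attacker logs in with no password
print(login_safe("alice", payload))        # parameterized query returns nothing
```

The fix is not exotic; parameterized queries have shipped with every mainstream database driver for decades, which is exactly why it is so frustrating that this flaw still tops the charts.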

How can that happen? How is it that we have evolved the technologies to build web applications from what we had in the 90s to what we have now, yet we still can’t figure out how to code to prevent an injection flaw? Laws and regulations on this matter are not new; HIPAA came to light in 1996, GLBA in 1999, PCI DSS in 2004, all with the goal of protecting consumer data and sensitive information. That’s more than 20 years ago, and still we are trying to solve the same problems, not to mention the new challenges technologies have brought to the table.

Internet of Things (IoT) is an interesting concept, one in which billions of smart items can be connected, and the convenience of sharing data helps to make life easier. And it certainly does! Of course, it is easier to take a modern car to the dealer and plug it in to have the computer report what is wrong. It is very convenient to control my house through my cell phone to turn on the lights or let someone in remotely. It helps to see who is knocking on my door using live streaming video from the security cameras while I am away. It is handy how we can use a voice agent to turn on the TV, put on some music or even look something up on the Internet. It is lifesaving when a pacemaker or an insulin pump can be controlled and automated to stabilize a person’s health condition. All that is simply awesome!

New day, new problems?

Let’s get to the “what-if” part of this conversation...

  • What if my home assistant is listening, and probably recording what we say at home? Maybe I am not discussing plans to commit a crime or conquer the world, but it is still my private life. After all, it happened before with Samsung Smart TVs. (See here)
  • What if a person can sit in their car close to your home, just in range of your Wi-Fi, exploit a security vulnerability in the Amazon Ring and gain control of your home security system? (See here)
  • What if not only the car dealer can connect to your car, but also someone else can take advantage of its insecure design to remotely connect and completely control your vehicle? I am not talking about only the radio or the AC, but the brakes, the engine and the steering wheel, too. (See here)
  • What if the fancy and convenient features in a new car, like using a key fob to open and start a car, can be used by malicious people to steal it from you even easier and faster? (See here)
  • Moreover, what if an evil person could give you a shock, or prevent it from happening when needed, by controlling your pacemaker’s activity? (See here)
  • …Or control an insulin pump the same way? (See here)

We are still living in a world where things are created, and then security is patched after the fact. Unfortunately, security being patched after the fact could mean after information or controls are stolen, after privacy is compromised, after accidents happen, after children go missing, after people die.

So should we stop using these innovative technologies? In my opinion, it's not about avoiding them, but about measuring the risk they imply and taking the proper steps to protect yourself, either when configuring the equipment, or through the additional caution you personally need to exercise to avoid or mitigate these downsides.

New problems, new solutions


Since IoT is becoming a worldwide phenomenon, shouldn't we already have professional security testers inspecting IoT products before they go to market? The US has the FDA to test and approve drugs before we can acquire them, and the FAA to validate new airplane designs and features, so I do not think I am that out of line.

In the end, companies need to understand that security testing NEEDS to be part of the product development cycle. It should be our responsibility as a society to demand that. 

If those post-development tests raised the cost of bringing a product to market, companies would need to invest in built-in security from the beginning to lower their overall costs and accelerate their go-to-market plans, which would give them a competitive advantage. The increased competition would subsequently make built-in security an industry standard, meaning more secure products for more members of society.

As my uncle John used to say: “You may say I’m a dreamer,” but I would like to add the word “Secure” and have SIoT in our world. Otherwise, if the risk is too big, I might end up opting for the huge inconvenience of taking out my cell phone and wasting a few clicks to play music on my “dumb” Bluetooth speaker... just sayin'.


It's Time to Evolve.