Don’t think I want to bluescreen my car!

Well, this is a little scary! The tradition with security vulnerabilities in software and computer hardware has been ‘ship it and fix it later’. For the most part this worked: it was responsive to business, and realistically it didn’t matter a great deal. Yes, your computer application might have a security vulnerability that may or may not be exploited by a bad actor, but even if it was, the impact was generally minimal. If it did happen to affect you, then you could “purchase support” at a reasonable price and everyone wins (that was probably a bit cynical of me).

However, today more and more lives depend upon the same vulnerable technology. Will the same methodology of ‘fix it later’ work when we are talking about medical monitoring equipment or vehicles on busy highways? I think we are approaching a tipping point where a new paradigm for how we tackle this problem needs to be found, and found very fast.

Hopefully markets and sense will prevail and find a solution, but if we look at this historically I think we might be in for a rough time.

https://www.nytimes.com/2018/10/11/opinion/internet-hacking-cybersecurity-iot.html 

Information Warfare at a Competitive Price

I guess it really shouldn’t be a surprise that this happened. We already knew that Facebook was conducting experiments on users, judging emotional responses to positive and negative articles in newsfeeds. At the time, however, I don’t think many really extrapolated this to its possible end result.

What Facebook has done isn’t (I don’t believe) inherently wrong; it has simply put a powerful information warfare tool in the hands of those who want to buy it, with few controls to govern its usage. Whether this was intentional or not will never be known, of course, but the fact remains that the capability has been created and is now in the wild.

This article isn’t particularly in-depth on the issues this raises, but it does describe a very simple example of why this capability matters and why something should really be done to control its usage.

https://www.newyorker.com/news/news-desk/cambridge-analytica-and-the-perils-of-psychographics

The spy among us

An interesting and balanced article about the potential risks of digital assistants. With Amazon Echo now available in Australia, we have the full gamut of choice when it comes to our digital assistants.

While I don’t think any of the products on offer today necessarily create a significant risk, they do introduce a vulnerability that can be exploited. You will be providing a cloud-enabled device with a lot of information about yourself, and you are also relying on the security practices of the company that provides the digital assistant. As recent years have shown, few if any companies can claim to be perfect in the area of data protection.

One thing on the horizon that we haven’t yet seen is the introduction of these devices into the business world. I personally can’t conceive of a justification for them today, but if you think back a few years we could probably have said the same about WiFi and the iPad!

Looking forward to this brave new world.

https://worldview.stratfor.com/article/surveillance-operative-lurking-living-room

Web 2.0 Redux

I think this comes down to a case of properly recognizing your IT asset and securing it appropriately.

The introduction of social media (also known as Web 2.0 back in the day) into government was, in my view, mostly reactive and an attempt to ‘be hip with what the kids are doing these days’. Things have certainly matured over the years: for the political classes, social media is now a very valid medium for communicating with the public, and most handle the messaging side of it very well.

Where I think things have gone a little astray is in understanding its value as an IT asset. Social media accounts on most platforms were never developed with the intent of being an official channel of communication for any organisation or political entity. The security was originally at a level appropriate to a personal internet service. All the major platforms have of course adapted to the new environment and introduced better security measures to protect their product (note: not users or customers… product, but that is another conversation entirely). But technology without process will never succeed.

Organisations and public figures need to understand that the security of their ICT systems also extends to systems that don’t actually belong to them.

https://www.theaustralian.com.au/national-affairs/politicians-warned-to-use-higherlevel-security-on-social-media-accounts/news-story/78ee468e47e5fa042b3d74a22dcf9e29

From little things, big breaches grow

With everything being connected these days, recording for our convenience and future reference, these types of data mashups are inevitable, I suspect. It goes to show that sometimes the smallest and seemingly most insignificant piece of electronics can lead to a very significant security issue.

The report referenced in the article also provides some additional interesting insights (https://www.gao.gov/assets/690/686203.pdf)

https://www.theverge.com/2018/1/28/16942626/strava-fitness-tracker-heat-map-military-base-internet-of-things-geolocation

In 2018, you don’t listen to your phone, your phone listens to you!

This article, while probably not much of a surprise (we have seen this type of thing before), does highlight a couple of valuable points to consider.

Your phone is essentially a listening device that happens to permit you to make phone calls. The agency denies that this took place, but given the research behind the claim, and the fact that a search of the Google Play store will turn up a selection of spying and eavesdropping apps you can purchase today, I think it is reasonable to assume this is more than plausible. If you are a nation state, or just need to conduct sensitive business discussions, phones can pose a risk.

Your users are your weakest link. No matter what sophisticated countermeasures you put in place, they will always be undone by a user wanting to see the animated dancing bunny or some other cool thing on the Internet. Security awareness training helps, but these days it is not always sufficient; sometimes you need to analyse the environment your users are in and adjust security controls accordingly.

Don’t trust the app store. Google has had major issues over the years but Apple is not immune either. Both are getting better, but so too are the attackers.

https://www.nytimes.com/2018/01/18/technology/lebanese-intelligence-spy-android-phones.html

Meet the plot of the next Speed sequel…

With the Amazon Echo set to launch in Australia next month, it is a good time to take stock of how much integration is really a good thing. The article doesn’t offer any revelations that would surprise anyone familiar with cyber security, but a cyber attack while sitting at your desk will have a rather less kinetic effect than a cyber attack while you are travelling at 80km/h on a busy highway.

I don’t think that security should be a reason to never consider a technology, but a real security-by-design approach should always be the focus in today’s environment.

https://www.nytimes.com/2018/01/25/business/amazon-alexa-car.html

A master key for all

The topic of law enforcement access to encrypted devices has again reached the media (http://www.smh.com.au/world/fbi-chief-calls-phone-encryption-a-major-public-safety-issue-20180109-h0fwz1.html), something that seems to appear with increasing regularity.

The ironic thing here is that if the various calls from different governments succeeded in implementing some form of backdoor to encryption, those governments would be in just as much trouble as the public who are railing against such changes. Probably more. I am certain the inevitable vulnerabilities and flaws introduced by such changes would be targeted at government information, with more devastating impact than for the average Joe or Jane Citizen.

This is a complex issue, and unfortunately one that gets sidelined by emotive arguments from both camps. The ‘for’ camp usually cites terrorism and child protection (the two most unassailable arguments, and ones rarely addressed directly by opponents). The ‘against’ camp usually cites privacy and government overreach into the lives of citizens (a valid argument of course, but not one that addresses the main issues).

I don’t disagree that something realistically needs to be done. Information is becoming predominantly digital, and law enforcement has a job to do which at times is hindered by the ubiquity of strongly encrypted devices.

However, the calls to implement a form of backdoor into existing crypto technology will only weaken the foundation of strong and secure Internet communications and transactions. The very communications and transactions that underpin the function of a lot of contemporary society.

The creation of backdoors or ‘master keys’ will create a weakness that will need to be strongly controlled and highly protected. Any compromise of this mechanism could potentially destroy the confidentiality of all data, devices, and systems protected by that crypto system. As we have seen over recent years, the ability of organisations or governments to protect their most valuable information is never assured. All this approach would do is create a crown-jewel target that would be the sought-after prize of cyber criminals and foreign intelligence services alike (if only we had some way to securely encrypt such a target… oh wait!).
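The single-point-of-failure structure of an escrowed ‘master key’ can be sketched in a few lines. This is deliberately toy code (the XOR ‘wrap’ is not real cryptography, and the record layout is invented for illustration); the point is purely structural: one leaked master key unwraps every escrowed device key at once.

```python
import hashlib
import secrets

def toy_wrap(master: bytes, device_key: bytes) -> bytes:
    """Toy 'key wrap': XOR the device key with a pad derived from the master.
    NOT real cryptography -- only illustrates the escrow structure."""
    pad = hashlib.sha256(master).digest()[: len(device_key)]
    return bytes(a ^ b for a, b in zip(device_key, pad))

toy_unwrap = toy_wrap  # XOR is its own inverse

master = secrets.token_bytes(32)
device_keys = [secrets.token_bytes(16) for _ in range(1000)]

# The escrow database: every device key wrapped under the one master key.
escrow_db = [toy_wrap(master, k) for k in device_keys]

# A single compromise of `master` recovers all 1000 device keys in one pass.
recovered = [toy_unwrap(master, w) for w in escrow_db]
assert recovered == device_keys
```

A real escrow scheme would use proper authenticated key wrapping, but the structural risk is the same: the value of the master key grows with every key it protects.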

The current article offers some figures in support of its argument: the FBI was unable to gain access to the contents of 7,775 devices in fiscal year 2017. This certainly represents an element of frustration, I would suspect, for investigators pursuing their cases. The article fails, however, to address a couple of points.

Of those 7,775 inaccessible devices and their associated investigations, how many investigations failed due to the inability to access the data? Namely, how many criminals avoided conviction because of the encryption on the devices? A device is one element of an investigation and, I would suspect, in very few cases the only avenue of approach. While access to the data is likely critical in a number of those cases, the raw figure presented as a ‘public safety issue’ is not really a valid argument.

Ever since encryption was first used by nation states for communications, there have been attempts to break the various codes in use. Where it hasn’t been possible to read the information being transmitted, other methods have been developed to gain as much insight as possible. Traffic analysis of the signals themselves can elicit useful information: the mere existence of a communication, its timing, and its recipient can all be used to further an investigation without knowing the content of the message.
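The traffic-analysis point can be illustrated with a toy sketch. The call-detail records below are entirely hypothetical and contain no message content at all, yet counting who contacts whom already exposes the busiest link and the central figure in the network.

```python
from collections import Counter

# Hypothetical call-detail records: (caller, callee, timestamp).
# No content -- only metadata, the kind that survives strong encryption.
records = [
    ("A", "B", 1), ("A", "C", 2), ("B", "C", 3),
    ("A", "B", 4), ("D", "A", 5), ("A", "B", 6),
]

# How often each directed pair communicates.
link_counts = Counter((src, dst) for src, dst, _ in records)

# How often each party appears at all (as caller or callee).
hub = Counter()
for src, dst, _ in records:
    hub[src] += 1
    hub[dst] += 1

print(link_counts.most_common(1))  # [(('A', 'B'), 3)] -> the busiest channel
print(hub.most_common(1))          # [('A', 5)]        -> the likely hub
```

Nothing here required decrypting a single byte, which is precisely why metadata analysis remains valuable to investigators even against perfectly encrypted traffic.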

Moves by law enforcement and governments to state their case in an emotive and urgent manner will not lead to a rational discussion of viable solutions. The risk is that, in their pursuit of capability, great damage may be done to the underlying confidentiality mechanisms on which the world relies so heavily.

Certainly, I would agree there is a need for this capability, but it is not solely a technical discussion. It requires engagement from all stakeholders: strong legislative protections as well as support from the relevant technology providers.

There are solutions to this problem. And I do hope that we reach the one that is right for all.