Thursday, 26 May 2011

Chrome app security model is broken

I’m worried. I’m worried for a lot of users who’ve installed Chrome Apps. I was idly browsing the Apps in the Chrome web store the other day and came across the popular Super Mario 2 app on the front page (over 14k users). I have to admit, I actually installed the app (extension) myself, so let me explain the user (and security) experience.



I saw the big splash screen for the flash game and thought I’d give it a try. There is a big install button (see picture). Installation is pretty instantaneous. As I looked at the screen, I saw the box to the bottom right. “This extension can access: Your data on all websites, Your bookmarks, Your browsing history”. I think I can legitimately give my mental response as “WTF!?! This is a game! What does it need access to all this for?”. I then immediately took steps to remove the app.

Removing the app

Disabling and removing the app was not as straightforward as you might think, which was also quite annoying. The Chrome web store also includes ‘extensions’ to Chrome (the extensions gallery), and it is not obvious to a user where these end up installed. In fact, you have to go to settings->tools->extensions to do anything about them. Normal installed Chrome apps are listed when you open a new tab (Ctrl-T), but this is not the case for extensions.

Permissions by default

Having removed the app, I set about investigating precisely what I had exposed this app to and the implications. Under the “Learn more” link, I found a full description of permissions that could be allowed by an application. I had to cross-reference these back to what the app / extension had asked for. The picture below shows the permissions (expanded) for the Super Mario 2 game.



I don’t want to go into great detail about the ins and outs of what some people would term “informed consent” or “notified consent”, but the bottom line is that a hell of a lot is being given away, with very little responsibility on Google’s part. After all, to the average user, the Chrome ‘chrome’ is an implicit guarantor of trust. It’s a Google app store, so the apps must have been checked out by Google, right?

I also won’t go into the top line “All data on your computer…”, which installs an NPAPI plug-in and is essentially game over in terms of access to your computer. To be fair to Google, their developer guidelines (below) state that any application using this permission will be manually checked by Google. The implication, however, is that the other applications and extensions aren’t.


So let’s concentrate on the permissions that are requested by the game. 

  1. The first, ‘Your bookmarks’, allows not only reading, but also modifying and adding to your bookmarks. Anyone fancy being set up? A legitimate link to your bank silently redirected to a phishing site?
  2. The second item, ‘Your browsing history’, is going to reveal a lot for most people. Very quickly, a motivated attacker is going to know where you live from your searches on Google Maps, what illnesses you’re suffering from, and so on. There is a note here that this permission request is ‘often a by-product of an item needing to open new tabs or windows’. Most engineers would call this, frankly, a half-arsed effort.
  3. The third item, ‘Your data on all websites’, seems to give the application permission to access anything that I’m accessing. Then, the big yellow caution triangle: ‘Besides seeing all your pages, this item could use your credentials (cookies) to request your data from websites’. Woah. Run that one by me again? That’s a pretty big one. Basically, your attacker is home and dry. Lots of different types of attack exist to intercept cookies, which will automatically authenticate a user to a website; this has been demonstrated against high-profile sites such as Twitter and Facebook using tools such as Firesheep. Given that this is a major threat vector, surely Google would have properly considered it in their permissioning and application acceptance model?
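To make this concrete, here is a sketch (in Python, purely for illustration) of how an extension's declared manifest permissions map onto the install-time warnings quoted above. The permission names are real Chrome manifest keys, but the grouping logic is my approximation of Chrome's behaviour at the time of writing, not Google's actual code, and the example manifest is invented.

```python
# Illustrative manifest for a game-like extension. 'bookmarks' and 'tabs'
# are real Chrome manifest permission keys; the host patterns grant access
# to every site. This is an invented example, not the Mario extension's
# actual manifest.
manifest = {
    "name": "Some Flash Game",
    "permissions": [
        "bookmarks",      # read AND write access to your bookmarks
        "tabs",           # needed just to open a tab, but exposes URLs/history
        "http://*/*",     # host access to every http site...
        "https://*/*",    # ...and every https site, cookies and all
    ],
}

def install_warnings(permissions):
    """Approximate mapping from permissions to Chrome's warning strings."""
    warnings = set()
    for p in permissions:
        if p == "bookmarks":
            warnings.add("Your bookmarks")
        elif p in ("tabs", "history"):
            warnings.add("Your browsing history")
        elif p.endswith("://*/*") or p == "<all_urls>":
            warnings.add("Your data on all websites")
    return sorted(warnings)

print(install_warnings(manifest["permissions"]))
# ['Your bookmarks', 'Your browsing history', 'Your data on all websites']
```

Note how little a developer has to do to end up with all three warnings: wanting to open a tab pulls in ‘Your browsing history’, and a lazy wildcard host pattern pulls in everything else.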


It’s pretty obvious how bad the Mario extension could potentially be, particularly when it is supposed to be just a flash game. What really irks me, though, is the ‘permissions by default’ installation. You click one button and it’s there, almost immediately, with no prompt. Now, I’m not the greatest fan of prompts, but there are times when they are appropriate, and install time is one of them. It gives me the chance to review what I’ve selected and make a decision, especially if I hadn't spotted that information on a busy and cluttered webpage.

I hear you all telling me that no-one reviews permissions statements in Android apps, so why would they do it here? Yes, I partially agree. Human behaviour is such that if there is a hurdle in front of us and the motivation to get the fantastic 'dancing pigs' application is sufficiently high, we'll jump over the hurdle at any cost. There is also a danger that developers will go down the route they have with Facebook applications: users accept all the permissions or they don't get dancing pigs. Users will more than likely choose dancing pigs (see here for more info on dancing pigs).

The beauty of a well designed policy framework

So we're not in an ideal world, and everyone knows that. I firmly believe that there is a role for arbitration. Users are not security experts and are unlikely to make sensible decisions when faced with a list of technical functionality. However, the user must be firmly in control of the ultimate decision about what goes on their machine. If users could have a little security angel on their shoulder to advise them what to do next, that would give them much more peace of mind.

This is where configurable policy frameworks come in. A fair bit of work has gone on in this area in the mobile industry through OMTP's BONDI (now merged with JIL to become WAC) and also in the W3C (sadly just stopped, in the Device APIs and Policy working group). The EU webinos project is also looking at a policy framework.

The policy framework acts, in its basic sense, as a sort of firewall. It can be configured to blacklist or whitelist URIs to protect the user from maliciousness, or it can go to a greater level of detail and block access to specific functionality. In combination with well-designed APIs it can act in a better way than a firewall: rather than just blocking access, it tells the developer that the policy framework prevented access to the function, allowing the application to fail gracefully rather than just hang. Third party providers that the user trusts (such as child protection charities, anti-virus vendors and so on) could provide policy to the user which is tailored to their needs. 'Never allow my location to be released', 'only allow googlemaps to see my location', 'only allow a list of companies selected by Which? to use tracking cookies' - these are automated policy rules which are more realistic and easy for users to understand, and which actually assist and advance user security.
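As a sketch of the idea - and to be clear, this is not the actual BONDI, WAC or webinos rule format; the structure below is invented for illustration - such a framework boils down to an ordered list of rules evaluated on each API access:

```python
# Invented, minimal policy framework sketch. Real frameworks (BONDI/WAC,
# webinos) use richer rule languages; this only illustrates the shape.

DENY, ALLOW = "deny", "allow"

# Rules a trusted third party could supply to the user; first match wins.
policy = [
    # 'only allow googlemaps to see my location'
    {"feature": "geolocation", "origin": "maps.google.com", "effect": ALLOW},
    # 'never allow my location to be released' (to anyone else)
    {"feature": "geolocation", "origin": "*", "effect": DENY},
]

def evaluate(feature, origin, default=DENY):
    """Return an explicit decision the API layer can report to the app,
    so the app can fail gracefully instead of hanging."""
    for rule in policy:
        if rule["feature"] == feature and rule["origin"] in ("*", origin):
            return rule["effect"]
    return default  # deny-by-default is the safe choice

print(evaluate("geolocation", "maps.google.com"))   # allow
print(evaluate("geolocation", "evil.example.com"))  # deny
```

The key design point is that the decision is returned to the calling API rather than the call silently disappearing, which is what lets the application degrade gracefully.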


Lessons for Google

Takedown - Looking at some of the comments from users on the Super Mario game, it is pretty clear people aren't happy, with mentions of the words 'virus', 'scam' and so on. The game has been up there since April - at the end of May, why haven't Google done anything about it? The game doesn't appear to be official, so it is highly likely to be in breach of Nintendo's copyright. Again, why is this allowed in the Chrome web store? Is there any policing of the web store at all? Do Google respond to user reports of potentially malicious applications in a timely manner?

Permissions and Access - You should not have to open up permissions to your entire browsing history for an application to open a new tab! This is really, really bad security and privacy design.

Given the evident permissiveness of Android and the Chrome web store, Google would do well to sit up and start looking at better solutions, otherwise they could be staring regulation in the face.

Bootnote

I mentioned this to F-Secure’s Mikko Hypponen (@mikkohypponen) on Twitter and there were some good responses from his followers. @ArdaXi quite fairly pointed out that just to open a new window, a developer needs to request permission to access ‘Your browsing history’ (as discussed above). @JakeLSlater made the point that "google seem to be suggesting content not their responsibility, surely if hosted in CWS it has to be?" - I'm inclined to agree: they have at least some degree of responsibility if they are promoting it to users.

I notice that Google now seem to have removed the offending application from the web store too. I think this followed MSNBC's great article 'Super Mario' runs amok in Chrome Web app store, after they picked up on my link through Mikko. It may be fair to say that the extension has been judged malicious.





Tuesday, 17 May 2011

M2M security is important but more importantly, how do we make money?

That's the story of last night's Mobile Monday in London. As with all marketing catchphrases, the panel struggled to properly define machine-to-machine (M2M), with one describing it as more machine-to-network. Accenture's David Wood (@dw2) presented quite a pragmatic view, stating that there are likely to be multiple different ecosystems of machines talking to other machines in specific industries. He pointed out that big incumbents would try to control the technology so that the revenue continues heading their way, which could hinder development, as it did with smartphones in the past. The prediction of a Smart Barbie drew some sniggers in the audience, but the toy industry does seem to be quite on the ball, so they will almost certainly exploit this kind of technology.

A long list of applications, from healthcare through to construction and industrial controls, was brought forward by the presenters, with Ericsson's Tor Bjorn Minde (@ericssonlabs) predicting 50 billion devices by 2020. This is an incredible number but is probably realistic: the number of transducers around already far exceeds it. In my view, what we are more likely to see is similar to the existing Distributed Control Systems (DCS) which have been in industry for years (I was working with one back in 1996): transducers connected back to one host system for the plant on a private network. Looking into this today, I see that industrial control systems already use wireless networks, so we're already in a healthy M2M world - it just isn't branded as such by the marketing people. Let's also not forget that the WiFi connected fridge and vacuum cleaner already exist; they're just not mainstream yet. It will probably take NFC tags on every product in your fridge to make that a hassle-free, useful product that people want (automatic ordering, recipe creation etc.). I guess that'll mean a new fridge in every home...

Adrian and Janet Quantock [CC-BY-SA-2.0 (www.creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons

Dan Warren from the GSMA (@tmgb) talked about embedded SIM and how to prevent SIM cards being stolen from smart meters and traffic lights. He also raised an important point that "you don't need to drive test a fridge" - mobility isn't that important for a lot of M2M applications. William Webb from Neul suggested that using the white space spectrum in the UHF space (which is bigger than the WiFi band) could be an opportunity for low-power devices talking to each other.

Camille Mendler (@cmendler) mentioned that people wanted to know "is it safe?". There was no real discussion of this, but one of the panelists privately told me afterwards that they didn't want to go anywhere near safety-critical software for applications such as automotive. As I've previously discussed, there needs to be some real discussion of this in the mobile phone industry, as it is a relatively new area for handset manufacturers and operators. Going back to DCS systems, being able to control a valve is co-dependent on the status of other transducers in the system, such as flow sensors, hardware interlocks and non-return valves. This is absolutely critical because human error can often cause huge safety issues. In a DRAM fab, you don't want to open a silane valve if you haven't purged the line with nitrogen first (silane is pyrophoric, and this specific mistake has killed people in fab explosions in the past). Now think about your own home - what would happen if you remotely turned the oven onto full but the gas didn't light? Consumer goods are certified for safety (e.g. CE marking), but new certifications will need to be in place for remote control, including that the embedded software is fit for purpose.
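The interlock idea can be sketched very simply: a dangerous command is only honoured when the co-dependent sensor states permit it. The state names and checks below are invented for illustration, not taken from any real DCS:

```python
# Invented sketch of a DCS-style software interlock: the valve command is
# refused unless the co-dependent line states allow it.

class InterlockError(Exception):
    """Raised when an interlock refuses a command."""

def open_silane_valve(line_state):
    # Never admit a pyrophoric gas into an unpurged line.
    if not line_state.get("nitrogen_purged"):
        raise InterlockError("line not purged with nitrogen")
    # Never open against a blocked flow path.
    if line_state.get("downstream_flow_blocked"):
        raise InterlockError("downstream flow path blocked")
    line_state["silane_valve_open"] = True
    return line_state

safe = {"nitrogen_purged": True, "downstream_flow_blocked": False}
print(open_silane_valve(safe)["silane_valve_open"])  # True
```

The point is that the check lives in the plant-side system, next to the sensors, not in whatever remote application happens to be issuing the command.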

The big question on everyone's lips was "who is going to make money?" and the answer didn't seem forthcoming. On Twitter, there was more talk of Arduino, which I blogged about the other day in relation to Android@Home. After my question about whether Google could be in a position to clean up here, the panel was somewhat dismissive, saying that this was what everyone used to say about Microsoft. It may be that the panel hadn't seen the announcements at Google I/O, but I do see this as a real possibility.

All the panelists mentioned security as being paramount but didn't elaborate, beyond David Wood saying that "security issues will bite us". I think that hits the nail on the head, but the audience nodding in agreement seemed to me like lemmings heading towards the cliff "because there's money to be made!".

One attendee didn't like the idea of being tracked around the supermarket and questioned privacy. Again, the concerned faces and the "yes, that is a challenge" response. "Yes, but think about the Nectar points!" I hear them cry.

So in summary, I think the really big issues are safety and security and there could be some serious money to be made out of looking at those issues - existing M2M installations are already under attack. A lot of people seem to be glossing over those issues in favour of the money to be made. There'll be lots of sensors out there reporting to create the 'internet of things' that developers crave, but the interesting stuff should and will be firewalled and secured and ultimately heavily tested and regulated.

Wednesday, 11 May 2011

A video introduction to webinos

In my last post, I briefly mentioned webinos.

I just spotted they have an introduction video to webinos which explains some of the automotive aspects, following the adventures of Bob, Susie and 'Wictor' booking their skiing holiday. Check it out:

Android@Home - Now I'll hack your house (part 2)

So in part one I introduced some of the reasons why home control hasn't been a mass-market success; here I'll discuss some of the potential uses and then cover some security points.

Uses of Home Control


To get your minds in gear, I've listed some possible (and existing) uses of home control. The idea of Android@Home will be to bring all this together. I'm guessing people are going to need to buy more network switches for their homes!

  • Curtain and window blind control
  • Electrical outlet control (timers and on/off)
  • TV control
  • Lighting control
  • Home CCTV
  • Burglar alarm
  • Motion sensors
  • Child monitoring
  • Garden lights
  • Pond waterfall and fountain pumps
  • Bath level monitors
  • Home cinemas
  • Thermostats and heating
  • Smart meters
  • White goods monitoring and control (fridges, cookers, washing machines etc.)
  • Doorbells
By open-sourcing the platform, Google creates a de facto standard to kick-start the home control industry. If you look a bit deeper, the technology is a combination of a wireless protocol from Google and a hardware Accessory Development Kit based on Arduino, which means you can access USB devices too. Their software project is on Google Code. Arduino also have a 'LilyPad' range for wearable applications, which could extend the applications for Android@Home even further. There are some interesting Arduino projects around, including a combination door lock. I can see how Near Field Communication (NFC) touch tech fits into all of this; machine-to-machine (M2M) technology less so, but in theory it could easily be interfaced.


The real cleverness in all of this will be in mashing up the data and applications - mood lighting for music, intelligent context-based decision making (e.g. I am the only person in the house, so switch to home monitor mode when I leave). I believe this will fly, because home control has long been a popular geek project, with various methods tried, such as PSP home controllers.
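That kind of context-based decision making can be sketched as a simple rule function combining sensor inputs; all the names and modes below are invented for illustration:

```python
# Invented sketch of context-based home mode selection: combine simple
# inputs (presence count, time of day) into a single decision.

def decide_mode(occupants_home, time_of_day):
    if occupants_home == 0:
        return "monitor"   # nobody in: arm cameras and motion sensors
    if time_of_day == "night":
        return "night"     # residents asleep: perimeter sensing only
    return "normal"        # people in and awake: everything relaxed

print(decide_mode(0, "day"))    # monitor
print(decide_mode(2, "night"))  # night
```

The mashup value comes from feeding real inputs (phone location, motion sensors, calendar) into rules like this rather than making the user flip modes by hand.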


Security

Clearly, this technology is a hugely attractive target for hackers, good and bad. Being able to find out what your neighbours are up to means there will be a generic consumer market for attacking these systems. This is bad news for your home network.

"you are relying on the developer to get it right"

Existing problems with the Android Market come down to malicious software that has slipped through and plain old bad coding by developers. With home control solutions, you are relying on the developer to get it right - not only for security, but also for safety. This is an untested area, so it is probably not completely covered by regulation, but I would certainly be worried about my oven accidentally over-cooking something by 12 hours. Many of the goods produced with wireless control will have their own local safety interlocks, but an intentional malicious attack, or the exploitation of vulnerabilities in a particular manufacturer's products, could cause chaos. Suddenly your house has become part of the critical national infrastructure! Imagine an attacker turning everything on in every connected house in the UK - it could easily bring down the national grid, and a botnet of houses could be used to blackmail governments. Wireless, device and perimeter security are the main issues that need to be considered. A lot of this technology is built around the web, which in my view is simply not secure, nor are web runtimes robust enough, for these kinds of critical applications.
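One obvious mitigation is for the device itself to enforce local limits on remote commands, whatever the app or cloud service asks of it. A sketch of the idea, with the limits and device names invented for illustration:

```python
# Invented sketch: a home controller enforcing its own hard caps on remote
# commands, independent of whatever the remote app requests.

LIMITS = {"oven": {"max_minutes": 180}}  # locally enforced safe duration

def validate_command(device, action, minutes):
    # Unknown devices get a zero cap, i.e. deny by default.
    cap = LIMITS.get(device, {}).get("max_minutes", 0)
    if action == "on" and minutes > cap:
        return False  # refuse: exceeds the locally enforced limit
    return True

print(validate_command("oven", "on", 45))   # True
print(validate_command("oven", "on", 720))  # False - 12 hours refused
```

Like the safety interlocks on the appliance itself, this check belongs on the device, where a compromised app or server can't simply bypass it.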

At a much lower level, if burglars could remotely access your home control system, they could shut off all your security and lights, enabling a much easier burglary. Conversely, it can be argued that the user is in much more control: if their house is burgled in the middle of the day (as the majority are), they can be alerted immediately. This in itself may not be enough to prevent the burglary, but the simple fact that the function exists increases the chance of the burglar being caught, and that deterrent could actually reduce burglary.


One other low level crime which could increase is handset theft. More people lose phones than have them stolen, but by putting home control onto the phone (perhaps it's an NFC lock to the house too), you are making the user much more of a target.

I could go on and talk about other things, such as further loss of privacy - think about the mountain of data Google will be sitting on about your habits. Other projects are also studying this area, the internet of things. The EU-funded webinos project is looking at the dangers of connecting real, physical things to the internet and how that can be secured; it'll be an interesting one to watch. Wait for Google's next move in this space - automotive.

Android@Home - Now I'll hack your house (part 1)

Very exciting news from Google I/O in San Francisco. Android@Home has been announced, a logical move and one which I would wager will be highly successful. With Google TV set to emerge in homes this year, and a plan by Google to merge their phone, tablet and Google TV code into one build codenamed "Ice Cream Sandwich" at the end of the year, the company seem well positioned to take on home control. Google TV offers users the ability to control their TV from their Android phone, amongst plenty of other features. This basic feature - using your phone as a remote control for the TV - has been something users have been crying out for for years, with nobody paying any real attention to it. I do remember a great program called Nevo on the iPAQ, with which you could control masses of IR equipment. I gained much amusement from changing the TV in the pub and works canteen, to the confusion of the staff there.


Cost, Complexity and Fragmentation




Yet home control has never really caught on. I put this down to a number of factors (which the mobile industry is well used to hearing): fragmentation, cost and complexity. These three have combined to prevent the market maturing in any sensible way. Yes, there are home control systems out there, but they are all pretty much proprietary. I've been considering doing some home control for years, but the components are over-priced and I can't interface with them using my own software. Take the example of a remote controlled socket kit from the UK's B&Q, or the control for remote lighting: everything needs its own remote control. We want to use our mobiles! No doubt this is true of the designers and manufacturers of these products too, which is why I think Android@Home is going to be a roaring success. Others such as Bose may continue to sell whole integrated systems, targeting the niche high-end market, but ultimately market forces will probably force them to ditch their proprietary systems.


Setting up IP cameras in your home currently also involves putting software on your PC. A lot of users have switched to much better open-source solutions such as iSpy, simply because of the poor quality and complex setup of the proprietary (or badged) PC software.


So, in summary: as a normal person I don't want to pay loads of money, I don't want it to be difficult to set up, and I want to run everything from the same software on my mobile phone.


In part 2, I will discuss some of the uses and why security is critical.







Monday, 9 May 2011

UMA – Unsafe Mobile Access?



I’ve been following Mobile Monday’s London chapter for a few years now and I know a few of the guys there, but I’ve never been able to get down to one of their events. I finally made it down to the April 2011 demo night and was suitably impressed by the number of attendees and the quality of the short 3-minute lightning presentations. I thought that I’d put a security spin on what I witnessed, but ended up writing this blog about one particular presentation, on ‘Smart Wi-Fi’.

Mark Powell from kineto.com talked about offloading data from the mobile network to Wi-Fi. Increases in data traffic have caused some big headaches for operators, so this is clearly an attractive proposition for them. It comes pre-loaded on some devices, partly because there are custom APIs involved. It uses 3GPP GAN (Generic Access Network) as the underlying technology to get access to the mobile network and is also known as UMA (Unlicensed Mobile Access) - kind of like a ‘soft’ femtocell (I might even go as far as to say a potential femtocell killer). It is being marketed by T-Mobile as ‘Wi-Fi calling’ and by Orange as ‘signal boost’. You’re going to get charged for your normal call on top of your broadband fee, but in general the benefit of having a better signal in the house is probably going to be quite attractive to people, and this may become a standard offering in the future. Kineto also explained that it helps avoid international roaming, because once you're on Wi-Fi it is just as if you’re in your home country.

As a paranoid security person, I always get a bit concerned when operators rush to a new technology to solve their problems (in this case, network load). Converged technologies bring completely new threat scenarios which can re-enable old attacks with new vectors. From a security point of view, some pretty obvious initial questions spring to mind:

  • What if you’re connected to a rogue router? Is any of my data going to be compromised?
  • Is a man-in-the-middle (MITM) attack possible on the access point?
  • Can fraud take place?


I searched around and found an interesting whitepaper from Motorola, produced back in 2006, which describes some high-level threat scenarios: UMA Security – Beyond Technology. I also found Martin Eriksson’s thesis, Security in Unlicensed Mobile Access. This states that the IMSI (International Mobile Subscriber Identity) is not secured well enough, exposing which subscriber is attached to the router and therefore their physical location. Note that the thesis was written in 2005 and very much plays this issue down – in 2011 most readers would take a different view of this privacy breach. Issues with authentication, and the potential for a MITM attack via the router allowing (fraudulent) free calls for other users of the access point, also seem to be areas of concern, as the router would be open to data sniffing (particularly if it is a rogue access point in the first place). The problem here is that the user is connecting to a less-trusted component than the normal mobile network, leaving them open to all sorts of potential attacks and manipulation.

Putting expensive hardware security into routers is not something I’ve seen, and routers are difficult to protect – the problems with mobile device security often stem from the fact that you’re putting the device in the hands of your attacker to tamper and play with. There is already a healthy community around router hacking and modification too, such as DD-WRT.

UMA applications on phones need to use shared secrets which are stored on the UICC. It would be interesting to analyse how well protected this data is on the device and whether it would be possible to snatch that data or even whether other attacks could be created on the UICC.

Although some of the issues here may have been addressed by the mobile industry, it seems that UMA could be a bit of a risk for users (I’d welcome any comments or updates, by the way, from those in the know). The technology is probably safe at the moment, as it is in its infancy and hasn’t crossed the radar of most of the hacking community. However, I, for one, will be steering clear for now.