Lured into a false sense of security, mobile developers are repeating security mistakes we have seen before, BT’s Konstantinos Karagiannis says, making the same weak input validation and excessive trust errors that plagued the early web.
In the realms of arts and leisure, everything makes a comeback. Bands take on retro sounds, movies get remade, and forgotten fashions cyclically find their way into shopping malls. Yet no one expects to say “what’s old is new” in the realm of technology.
Technology may be constantly evolving, but old development mistakes can plague even the most cutting-edge applications and devices.
There will always be someone who can find a way to use a gadget or application in a way the developer never intended. The urge to do so is the hacker spirit defined; spotting certain basic design mistakes is the core hacker skill.
Secure coding and the cardinal sin of weak input validation
Some of the basic coding mistakes we’ve seen in the past continue to crop up in apps developed for Fortune 100 companies. Otherwise, you wouldn’t hear of cross-site scripting (XSS), SQL injection, or session hijacking. But secure coding has come a long way, and a good number of web app devs “get it” now.
Yet, as we in the Ethical Hacking Center of Excellence (EHCOE) are finding, mobile platforms are “like, totally retro.” Lured into a false sense of security, mobile developers are making the same types of weak input validation and excessive trust mistakes as in the early days.
Consider the cardinal sin of weak input validation. You can never trust user input.
Before XSS and SQL injection became popular hacks, apps were being owned via a simpler validation attack: parameter tampering. Say an app would request a user’s info with a parameter such as userid=bob. An attacker would change this parameter to userid=jim and get Jim’s info instead. Sometimes even a wildcard character like an asterisk (*) could be used, returning everyone’s information at once.
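The attack, and its fix, can be sketched in a few lines of Python. The account store, parameter names, and functions here are hypothetical stand-ins for a real backend; the point is that the safe version derives the record from the identity established at login, not from anything the client sends.

```python
# Hypothetical account store; stands in for a real backend database.
ACCOUNTS = {"bob": "bob@example.com", "jim": "jim@example.com"}

def naive_lookup(params):
    # Vulnerable: trusts the client-supplied "userid" parameter, so an
    # attacker can rewrite userid=bob to userid=jim and read Jim's data.
    return ACCOUNTS.get(params.get("userid"))

def safe_lookup(params, session_user):
    # Fixed: the client-supplied parameter is ignored; the record comes
    # from the authenticated session, so tampering changes nothing.
    return ACCOUNTS.get(session_user)

# An attacker logged in as "bob" simply rewrites the parameter:
leaked = naive_lookup({"userid": "jim"})        # Jim's data leaks
denied = safe_lookup({"userid": "jim"}, "bob")  # still only Bob's data
```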
The earliest such attacks were possible because developers thought that only browsers could interact with an app. As developers learned, hackers always find a way. For example, a local web proxy allows attackers to intercept data streams sent by a mobile phone or device, whether it’s transmitting over Wi-Fi, 3G, 4G, or likely anything that comes next.
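To make the interception step concrete, here is a minimal sketch using Python’s standard library. The proxy address is hypothetical (a local Burp or mitmproxy listener on port 8080 is typical), and no request is actually sent here.

```python
import urllib.request

# Hypothetical local intercepting proxy (e.g. Burp or mitmproxy).
proxy = urllib.request.ProxyHandler({
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
})
opener = urllib.request.build_opener(proxy)
# Every request made through `opener` now transits the proxy, where an
# attacker can read and rewrite parameters before forwarding them on.
# A mobile device is no different: point its Wi-Fi proxy settings at the
# same listener and the app's traffic becomes just as visible.
```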
This kind of parameter tampering was possible because of a form of excessive trust. Logging in with a user ID and password is a good start, but the app then has to manage the session to ensure that only the logged-in user has access to data thereafter. With a lot of trial and error, sane ways of handling sessions via strong cookies and changing token parameters came into widespread use in the 2000s.
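One ingredient of that sane handling is a session token an attacker cannot guess or enumerate. A minimal sketch using Python’s `secrets` module (the function name is mine):

```python
import secrets

def new_session_token():
    # 32 bytes (256 bits) from the OS CSPRNG, URL-safe base64 encoded.
    # Unlike sequential IDs or user-derived values, these tokens cannot
    # be predicted or incremented into a neighboring user's session.
    return secrets.token_urlsafe(32)
```

Randomness alone doesn’t finish the job: a token like this still needs server-side expiry and rotation on login or privilege change.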
Then user integrity took a dive when Web 2.0 apps were introduced.
While the main functions of most of these types of interactive apps would be secure, the Web 2.0 piece of the app, say Flex or AJAX, would make dangerous calls in the background, often to unhardened extra servers. Some of these calls would request personal information without any authentication!
Much like the example of changing userid=bob to userid=jim, Web 2.0 apps sometimes give information to anyone who guesses at a parameter and makes a properly formatted request. The rationale was that a user couldn’t see this traffic, so why secure it. Of course, that kind of thinking was wrong in the early days of the web, then in Web 2.0, and now in mobile apps.
We have seen some severe examples of weak input validation and excessive trust in mobile applications. Much like Web 2.0 apps, mobile apps often make dangerous, insecure calls to servers.
As a result of our ability to proxy traffic and see all calls that mobile apps make, we have saved clients major embarrassment or financial loss by finding flaws before the bad guys did. I’m not just being dramatic.
The wild wild west vibe of mobile hacking
Here are two dangerous examples of what we’ve seen:
- Imagine a gift card app that lets an attacker generate hundreds of thousands of dollars in gift card codes for free. You better believe finding this led to one of those emergency conference calls. This was excessive trust at its worst, with an “invisible” server that happily sent codes to whoever requested them. All you needed to do was see how the app requested a valid card, and then tamper with parameters to get other codes.
- Consider the disaster awaiting users of a loyalty program if the mobile app lets attackers get the complete personal information of all other users. All an attacker needed to do was log in with his or her account, then intercept subsequent parameters the app was sending. Guessing at other loyalty numbers (trivial) would return sensitive information of associated accounts. And it gets worse … the returned information could be used to take over these other accounts by resetting passwords, etc.
There is no mystical protection provided by mobile platforms. The servers that these apps reach out to are accessible by any Internet-capable device, including a hacker’s tool-laden laptop. And the apps themselves live on devices riddled with flaws—no system-wide encryption, default passwords for root-level accounts—that only make things worse.
For now, we’re enjoying the wild wild west vibe of mobile hacking. Why would we do this job if there were no opportunities for eureka moments and ego-boosting exploits? Still, we can’t hack every app in the world before it goes live.
Developers need to recognize that mobile apps are no different from web apps in terms of secure coding practices. They have to make sure the apps deny all but what’s required to function, and mistrust the infrastructure on which they’re running.
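A deny-all-by-default check looks roughly like this. The record store and field names are invented for illustration, but the shape applies to both incidents above: the server refuses any record not explicitly owned by the authenticated user.

```python
# Hypothetical record store standing in for gift cards or loyalty accounts.
RECORDS = {
    101: {"owner": "bob", "data": "bob's gift card code"},
    102: {"owner": "jim", "data": "jim's gift card code"},
}

def fetch_record(record_id, authed_user):
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != authed_user:
        # Default outcome is denial. Raising the same error for
        # "missing" and "not yours" also avoids leaking which IDs exist.
        raise PermissionError("access denied")
    return record["data"]
```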
Retro can be fun, but we really don’t need the number of vulnerabilities that sprang up in the ’90s any more than we need a flannel-laden grunge rock rebirth.
The author, Konstantinos Karagiannis, is Principal Consultant, Ethical Hacking, at BT Global Services.