The OUYA In-App Purchasing API Is Crazy Broken

If you don’t know about the OUYA game console, you’re one of the lucky ones. The makers of the Android-powered box have promised to “crack open the last closed platform”—video game consoles—but they’ve had no end of problems. They haven’t responded to requests to comply with the licensing agreements of the software they ship, the controllers are of insanely poor quality, and The Verge gave the console one of the lowest review scores in the publication’s history.

What I want to highlight here is the naiveté of the people who wrote its In-App Purchasing API. Here’s what the documentation says about decrypting purchase receipts:

The receipt decryption happens inside the application to help prevent hacking. By moving the decryption into each application there is no “one piece of code” a hacker can attack to break encryption for all applications. In the future, we will encourage developers to avoid using the decryptReceiptResponse method. They will need to move the method into their application, and perturb what it does slightly (changing for-loops to while-loops, and so forth) to help make things even more secure.

They’ve got to be kidding. I, too, want the encryption of my purchase receipts subject to the whims of compiler bytecode optimizations. Do yourself a favor and go look at the page. Notice anything else funny about it? I’ll give you a minute.
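For what it’s worth, loop “perturbation” doesn’t even survive compilation. Here’s a minimal sketch (a hypothetical XOR “decryption,” not OUYA’s actual routine) with the same loop written both ways. Run javap -c on the compiled class and you’ll find javac emits essentially identical bytecode for the two methods.

```java
// Hypothetical illustration only; this is NOT OUYA's decryption code.
// Both methods XOR a buffer against a key. Compile this, then compare
// the two with `javap -c LoopPerturbation`: javac produces essentially
// the same bytecode for the for and while variants, so "perturbing"
// the loop adds exactly nothing.
public class LoopPerturbation {

    static byte[] decryptFor(byte[] data, byte key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key);
        }
        return out;
    }

    static byte[] decryptWhile(byte[] data, byte key) {
        byte[] out = new byte[data.length];
        int i = 0;
        while (i < data.length) {
            out[i] = (byte) (data[i] ^ key);
            i++;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] ciphertext = {0x06, 0x0b, 0x02, 0x02, 0x01};
        // Both variants recover the same plaintext ("hello"), and the
        // "perturbation" between them vanishes at compile time.
        System.out.println(new String(decryptFor(ciphertext, (byte) 0x6e)));
        System.out.println(new String(decryptWhile(ciphertext, (byte) 0x6e)));
    }
}
```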

As The Verge points out,

there are often no confirmation boxes or checks against you spending thousands of dollars. Oh, you hit Upgrade because it’s right next to Play and the controller’s laggy? Perfect. Thanks for your money.

The API doesn’t enforce any kind of confirmation for in-app purchases. As the developer of an OUYA game, you can make any number of purchases you’d like on behalf of your customers, and do so in such a way that they have no idea it’s happening.
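To make that concrete, here’s a sketch of the failure mode. I’m deliberately not reproducing the real ODK signatures (the entry point is, if memory serves, OuyaFacade’s requestPurchase; the interface below is a hypothetical stand-in), because the problem is structural: nothing sits between the app’s code and the charge.

```java
import java.util.UUID;

// Hypothetical sketch, not the literal ODK API. PurchaseApi is a
// stand-in for the ODK's purchase entry point; its signature here is
// an assumption for illustration.
public class SilentPurchases {

    interface PurchaseApi {
        // Fires a purchase request for the given product. No system
        // dialog, no PIN, no confirmation the app can't suppress.
        void requestPurchase(String productId, String uniquePurchaseId);
    }

    static void drainWallet(PurchaseApi ouya) {
        // Nothing stops a game from running this in onCreate(), on a
        // timer, or whenever the player presses any button at all.
        for (int i = 0; i < 1000; i++) {
            ouya.requestPurchase("expensive_dlc_pack",
                                 UUID.randomUUID().toString());
        }
        // The player finds out when the credit card statement arrives.
    }
}
```

Contrast this with Google Play or iOS in-app billing, where the operating system itself interposes a confirmation sheet the app can’t suppress.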

Holy hell.

Shrinking Universe

Last year I made a New Year’s Resolution. I needed to do something about my very real addiction to video games. I went through my hard drive and wiped it clean. My collections of NES, SNES, and N64 ROMs, my stack of virtual disks for the Tandy TRS-80, my Boot Camp partition with Steam installed. Everything had to go.

I was mostly successful. I kept some LucasArts and Sierra adventures around for research, and I installed a few classics I missed a lot. But I allowed myself only one indulgence at a time: if I wanted to enjoy some game or other, that was all I played until I could say I’d ‘finished’ it. Then I had to put it down.

This worked, to a reasonable extent. Since moving to Germany I’ve loosened up a bit further, and I’ve allowed myself to play things that don’t require too much thought, that can be picked up and put down easily: the peerless Hotline Miami, some of the latest work from the Doom community, the excellent Don’t Starve.

When I was younger I would put hundreds of hours into a game without a second thought. I have no doubt I’ve put a thousand hours or more into Morrowind, for example, and probably five hundred hours into the GTA series. That kind of enormous, open world beckoned to me.

But I’ve noticed something. As I get older, as my New Year’s Resolution constrains my choices, I find myself gravitating towards a certain kind of game. I play Minecraft less. I play Don’t Starve more. I play Skyrim less. I play Fallout more. So what’s the difference?

Minecraft doesn’t end. Don’t Starve does. Skyrim doesn’t end. Fallout does.

We have the ability to create and hide in worlds of enormous—sometimes effectively infinite—size. All to ourselves. Like mischievous little gods and goddesses.

There is this tiny point of sadness whenever I come upon some digital village and make my way around it: I feel insanely lonely, cut off from the rest of the world.

You’ll never meet another soul in this place, unless, of course, you’re playing SA-MP.

We have one shared experience: the world around us, the closest to reality that we can perceive, the signals our senses intercept. There’s a tradeoff. You can have the entirety of a digital world to yourself, and have dominion over every voxel in it, but it’s lonely and empty. And then there’s the world we share, where everyone has a vested interest, everyone is scrambling over everyone else to get to the top, resources are limited, and it’s not a game.

Sure, there’s shared digital experience. Minecraft servers. World of Warcraft. These things are important because, up to a point, they can be made indistinguishable from the real world. Our senses can be tricked into immersing themselves in them. As I get older, I find this isn’t a trick I enjoy playing on myself.

I’m not quite sure how to wrap this up. I don’t know how to reconcile this conflict: having played video games my entire life, and wanting less and less to find myself in them. Having spent countless hours in front of the screen with friends playing San Andreas, Mario Kart, even audience-participatory Fallout. Now I’m making my way across the globe, with a living, visceral cast of characters, sights, and sounds around me.

I want modern games that I can put down, and still appreciate. I want multiplayer games that allow me to enjoy time with friends, and then give me my time back when we’re done.

The Next Dinosaurs

I grew up alongside three technological cycles:

  • the cycle of the desktop computer, starting with the Tandy TRS-80 for me,
  • the cycle of the cell phone and smartphone, and
  • the cycle of the Internet.

That’s three enormous technological leaps in the span of twenty or so years. Children today are growing up in a world where all three are simply accepted as normal.

If that doesn’t make you feel old, keep reading.

Despite how it seems, these technological leaps are reasonably spaced out. I’m picking arbitrary endpoints here, but if you look at

  • one of the first mainframes, the IBM 701, introduced in 1952,
  • one of the first desktop PCs, the HP 9800, introduced in 1972,
  • and one of the first smartphones, the IBM Simon, introduced in 1994,

these advances seem to occur every twenty years or so. Every twenty years, software developers need to shed their prejudices and evolve, or they get swept away. There’s something to be said for the lone COBOL programmer in 2013 who is paid handsomely for the privilege of keeping an aging HR/billing/air-traffic-control machine alive, but such programmers are anomalies.

We had the mainframes. Then the thin networked clients, the teletype terminals. Then the desktop PCs. Now we’re in a hybrid era. Some companies are betting on the web: moving standards forward, creating immersive web applications, polishing the user experience. Some companies are betting on mobile. Both are thin clients of a sort. Both sit at the tail end of a network, and both are useless without a connection.

The platform we develop on, the PC, is less and less a part of the current technological cycle. I’m a child of the PC era. It’s where I develop, and where I see the fruits of my labor. But it’s starting to make less sense every year.

We’re now about twenty years past the first smartphone.

The web as platform is facilitated by the web as development platform.

Mobile as platform is facilitated by mobile as development platform.

Microsoft ditched us for Windows 8. Apple ditched us for iOS. We have dedicated gaming machines, dedicated media machines. The average person doesn’t need the incredible power a desktop offers. The average person can survive with an iPad and an iPhone.

It doesn’t make sense to use these big, clunky, archaic machines to write software. It divorces us from the product.

Apple doesn’t allow apps that download or run executable code, but that hasn’t stopped a small industry of iPad-based IDEs from cropping up. How many iPad apps are being written on an iPad today? How many will be written on an iPad next year? The year after that? I have a feeling those numbers are only going to go up.

You don’t need to store your code on your desktop. You can keep your source control in the cloud. You don’t even need the power of a desktop to compile your app. You can build in the cloud. So why are we still writing web apps and mobile apps on desktops?

Are we just not paying attention? Are we going to be the next dinosaurs?
