
gattaca

“Power tends to corrupt,” said Lord Acton, “and absolute power corrupts absolutely. Great men are almost always bad men.”

The sexism needs updating but the sentiment remains true. That’s been all too obvious this week, during which the powers that be did their damnedest to protect their once-secret surveillance programs…while the NSA responded to Freedom of Information Act requests with the claim “There’s no central method to search [internal NSA emails] at this time.”

@rezendi In related news, NSA says it’s never come across the term “dogfooding” in any of its data trawling, & doesn’t know its definition.
- Lun Esex (@LunaticSX), July 24, 2013

The black-comedy message is clear: surveillance is something that the powerful do to the powerless, in their own perfect secrecy. Two-way transparency is but a pipe dream in the minds of civil libertarians. Which puts me in mind of science-fiction guru Charles Stross’s recent blog post A Bad Dream:

Is the United Kingdom a one party state? […] I’m nursing a pet theory. Which is that there are actually four main political parties in Westminster: the Conservatives, Labour, the Liberal Democrats, and the Ruling Party. The Ruling Party is a meta-party…it always wins every election, because whichever party wins is led by members of the Ruling Party, who have more in common with each other than with the back bench dinosaurs who form the rump of their notional party […] Any attempt at organizing a transfer of power that does not usher in a new group of Ruling Party faces risks being denounced as Terrorism.

Of course in America this is old news. The one thing that the Tea Party and the Occupy movement have in common is their desire to throw the Ruling Party bums out of Washington. It’s an accepted axiom in American politics that anyone who has been in Washington too long is suspect and probably corrupt. (More than 75% of Americans think their political parties are corrupt.)

The wave of hope that drove Obama into office was fuelled in part by the belief that he wasn’t a member of the Ruling Party. Well, even if he wasn’t then, he sure is now. That’s what usually happens to successful politicians:

The GOP establishment: Obama is a tyrant, except in the areas where we want to give him sweeping unilateral power to exercise in secret.
- Conor Friedersdorf (@conor64), July 26, 2013

Nancy Pelosi in 2005: Patriot Act “a massive invasion of privacy” 1.usa.gov/1bOHdyZ Today, she voted to let that invasion continue.
- Trevor Timm (@trevortimm), July 24, 2013

Similarly, the recent antigovernment street protests in Turkey, Brazil and elsewhere are, arguably, protests against the various international incarnations of the Ruling Party. As Slavoj Žižek writes in the London Review of Books:

What we first took as a failure fully to apply a noble principle (democratic freedom) is in fact a failure inherent in the principle itself. This realisation – that failure may be inherent in the principle we’re fighting for – is a big step in a political education. Representatives of the ruling ideology roll out their entire arsenal to prevent us from reaching this radical conclusion.

Žižek’s a Marxist, and I’m a staunch capitalist, but even I have to admit that he may be on to something there. It’s possible that multiparty democracy suffers from an inherent and fundamental flaw: the eventual installation of an entrenched, parasitical Ruling Party.

So, of course, as a techie who instinctively thinks in terms of hacking and fixing systems, I immediately find myself wondering: is there a technical fix? Can better technology save us from the Ruling Parties, or at least alleviate some of our governments’ more glaring flaws? Or will technology further entrench and empower them?

These days it’s hard for Silicon Valley to look at Washington with anything other than dismay trending towards horror, along with a powerful sense of “there has to be a better way.” I expect that’s why people have seriously called for Google to buy Detroit. I suspect that’s what Larry Page had in mind, at least in part, when he mused aloud about the desirability of a mad science island untrammeled by antiquated laws and politics, where we could experiment with new and better systems:

We’re changing quickly, but some of our institutions, like some laws, aren’t changing with that. The laws [about technology] can’t be right if it’s 50 years old – that’s before the Internet. Maybe more of us need to go into other areas to help them improve and understand technology.

Google is, after all, the apotheosis of the Valley: a company that muses about offering eternal youth to its employees somewhere down the road, a company that oozes scientific method. Doesn’t that sound a whole lot better than the Ruling Party? Doesn’t it seem like the best thing we could do is import the Google Way to Washington, and turn our government into a genuine technocracy?

Sorry. No. Silicon Valley thinks of itself as built on merit, innovation, iteration, and rational thought, and to some extent it is, but its worldview can be even more blinkered and bubble-bound than that of the Ruling Party. Technology does not solve all of the world’s problems, and it’s dangerous hubris to think that it might. Rational thought is a flawed tool in a world full of irrational people. And most of all, power corrupts; anyone who replaces the Ruling Party will probably eventually become a member.

But on the other hand, avoiding politics and/or pretending that it has nothing to do with us is no longer an option for the tech industry. Edward Snowden has shown us that much. We have become too important and too powerful. As I wrote here almost three years ago:

You probably don’t want to read about political idiocy here, and I can’t blame you. But it may be time for the tech industry to start paying much more attention to the political world, because as Wikileaks vividly illustrates, these days almost every political issue has tech aspects, and hence, down the road, tech repercussions.

I can’t help but think I wasn’t wrong. But that doesn’t mean the tech industry should be trying to directly shape what happens in Washington and Westminster. We provide tools; we don’t dig trenches. That’s not what we’re good at. (Witness FWD.us.) Instead we should collectively be trying to ensure that tomorrow’s technologies, and tomorrow’s networks, support individual authority (and privacy), rather than building centralized panopticons which increase and cement the existing hegemonies.

I realize that this all sounds simultaneously paranoid and naïve. But I believe we’re nearing a crucial point at which, depending on a myriad of separate decisions ultimately made by individual people, tomorrow’s technologies can, and will, either increase or diminish our individual and collective freedoms by a very significant degree. The direction we will take seems finely balanced, and could still go either way. So keep your fingers crossed, and your eyes wide open.

Postscript: I’ll be in Las Vegas this week to cover the Black Hat and DefCon security conferences. I’m not entirely sure yet what kind of reportage I’ll be filing, but if you’re interested in occasional sardonic tweets from Sin City, follow me on Twitter.


Editor’s note: Ashley Verrill is a software analyst for Software Advice, as well as the Managing Editor for the Customer Service Investigator blog.

When a leaked memo broke the news earlier this year that Yahoo was ending its work-from-home program, CEO Marissa Mayer was both lauded and lambasted for the decision. Companies such as Best Buy followed suit by announcing they too would end their flexible work options, while some industry observers called the move an “epic fail.”

The fact of the matter is that while not every remote worker is happier, more productive, and producing better-quality work, as some have purported, telecommuting offers indisputable benefits for certain types of businesses. Apple, for example, permanently employs a massive network of remote customer support agents (dubbed At-Home Apple Advisors), saving itself the huge real estate expense of a call center. At the same time, its recruiters can draw from an enormous talent pool since location isn’t a factor, and weather never prevents “advisors” from coming to work.

But running a team this way doesn’t come without challenges, chief among them effectively training people in disparate locations. Yet Apple makes it work. I know this because I recently emailed more than 40 current and former “advisors” on LinkedIn to learn how. (Apple declined to comment; the company is notoriously tight-lipped about strategy.) The methods in every case were intense, sometimes sort of silly, and at other times borderline extreme. As one employee commented in a community thread, “I can honestly say the job was probably one of the most stressful I have ever had, and I used to counsel drug addicts and felons!”

For starters, according to the advisors I spoke to, Apple doesn’t tell trainees until after they’re “hired” that the four-week, 8 a.m.-5 p.m. training program is actually a testing period. The curriculum is broken into four one-week sections that mix live instruction with self-paced modules in iDesk. At the end of each week, everyone takes an exam. Trainees have two chances to hit the grading benchmark (two advisors said this was 89 percent; one said it was 80 percent) before they are kicked out of the program. So from the start, workers have an incentive not only to pay attention, but also to keep the job once it’s theirs, because they worked so hard to get there.

Next, Apple uses a variety of tactics to ensure that would-be advisors are actually at their computers while training is going on. For example, trainers deliver regular prompts to each person throughout live instruction. These can be questions, requests for input, or just a cue for the trainee to click. One former advisor I spoke to said Apple monitors mouse movements: if your mouse doesn’t move within a certain amount of time, you’re sent a prompt, and if you still don’t respond within 30 seconds, the trainer might actually call your cell phone.
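To make the mechanics concrete, here is a minimal sketch of what that idle-detection loop might look like, assuming a browser-based training client. The thresholds, function names and escalation hooks are all assumptions for illustration; Apple’s actual tooling is unknown.

```typescript
// Hypothetical sketch of the idle-detection flow advisors described.
// Thresholds and names are assumptions, not Apple's actual code.
const IDLE_LIMIT_MS = 5 * 60 * 1000;  // the unspecified "certain amount of time"
const RESPONSE_WINDOW_MS = 30 * 1000; // the 30-second response window

let lastActivity = Date.now();
document.addEventListener("mousemove", () => { lastActivity = Date.now(); });
document.addEventListener("keydown", () => { lastActivity = Date.now(); });

// Stand-ins for whatever the real training client would do.
function sendPrompt(): void {
  console.log("Still there? Click to confirm you're at your desk.");
}
function escalateToTrainer(): void {
  console.log("No response within 30 seconds; flagging trainee for a call.");
}

let prompted = false;
setInterval(() => {
  if (!prompted && Date.now() - lastActivity >= IDLE_LIMIT_MS) {
    prompted = true;
    sendPrompt();
    const promptedAt = Date.now();
    setTimeout(() => {
      // If there has been no activity since the prompt, escalate.
      if (lastActivity < promptedAt) escalateToTrainer();
      prompted = false;
    }, RESPONSE_WINDOW_MS);
  }
}, 10_000); // poll every 10 seconds
```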

In addition to these prompts, trainers can ask the class to turn on their cameras for group discussion at any moment, making it immediately clear if someone isn’t at their desk. Also, many of the test questions are worded in such a way that the trainee would only know the answer if they participated in all the past week’s activities.

All of these tactics are extremely effective, not only at ensuring attendance but also at fostering competition. Every advisor I spoke to reported never missing a session (though several mentioned other trainees dropping out, or being kicked out because of test scores). This was in part because the entire class sees when someone doesn’t respond to a prompt and when they fail a test.

In addition to ensuring attendance, Apple uses team psychology to keep workers engaged during and after the training period. Programs are taught to groups of 20-100 people who all live within 100 miles of one of its many “hub cities.” Often between lessons, the trainer asks the group to talk about themselves: what they did that weekend, whether they have any pets, or even “why doesn’t everyone send the group a picture of what they’re having for lunch?” Some classes even had crazy hat day (that’s the silly part I mentioned above).

This teamwork is also reinforced by breaking the class into smaller groups for mock calls near the end of the program. One person in the sub-group fields a hypothetical customer call, and the entire team is asked to give feedback after the exercise is over. Many of the advisors reported regularly chatting on the computer, or even over the phone, with classmates both during and after instruction.

While some of these strategies were reminiscent of team-building exercises I did at summer camp, they worked. Two of the people I interviewed no longer work as Apple advisors, but still keep in contact with a few people who were in their training group.

Finally, Apple creates buy-in from the team by instilling company culture. The first few days of training are dedicated to describing the company’s history, the Cupertino campus culture, and what it was like working with Steve Jobs. Before training starts, each advisor is sent a care package that might include a T-shirt, plaque, mug, gift cards and other keepsakes demonstrating that they are “part of the Apple family,” as one person put it.

After training, workers begin a job that is extremely intense. One certified Apple trainer told me that managers closely scrutinize every call, and advisors are required to maintain a nearly perfect customer satisfaction score (among other metrics used to measure performance). All of this, and advisors make between $9 and $12 an hour (according to those I spoke to).

Outsiders would probably argue that Apple’s training program and high work expectations aren’t really feasible for other support organizations – people would quit after the first day. I’d be inclined to agree. In the same way that customers will pay three times as much for Apple’s technology, workers will endure much more to get Apple on their resumes and be “part of the family.”

So, while companies that manage teams permanently off-site can still realize the same benefits I mentioned at the outset – savings on real estate and a broader reach for recruiting – they may not be able to establish a remote team that is as effective as Apple’s. Only Apple can demand this level of intensity, because as one advisor put it, “Apple has no qualms with saying if you are not the best, you can always work somewhere else. They make that abundantly clear.”

[Image via Jeremy Jenum]


Editor’s note: Tadhg Kelly is a veteran game designer, creator of leading game design blog What Games Are and creative director of Jawfish Games. You can follow him on Twitter here.

To the joy of many, Microsoft announced another Xbox One pivot: Rather than try to maintain a fortress of solitude, the console will support indie publishing. You’ll be able to use your console as a dev kit (traditionally dev kit licenses could be very expensive) to make and publish your games. Microsoft even promises to remove some of the category barriers that segregated indie games to a backwater page in the Xbox dashboard.

These moves can be read in two ways. The first is largely as a reaction to Sony. Sony has been flirting with the indie developer community for a while, quietly building up relationships and facilitating the publishing of a number of games such as Journey and Thomas Was Alone. As part of the PS4 launch, the company has significant plans to allow small developers to self-publish on the system, although still under a dev kit model. It promises to send free kits to developers that need them.

The second read is to consider these moves in light of wider trends. Outside of giant thousand-man studios and tiny indies, most mid-sized gaming companies are nowhere within 100 kilometers of consoles these days. There’s just no place for them in a sector that values its 20m+ unit hits, and they can’t afford to compete at that level. All of those people have shifted to mobile, tablet or social instead, where they are finding success.

The move to attract indies sits semi-uncomfortably. The console industry is used to acting like a car showroom, developing specific pieces of beautiful game content and then engaging in a large sales push toward success. Fans of consoles (including many developers) are also used to this model, and tend to think of this activity as “real games,” as well as the most economically significant activity in the industry. Much as Hollywood still thinks that box office means something, console game executives tend to be more impressed by stories involving unit sales than by residuals.

That showroom mentality is what led Microsoft down the path of making Xbox One into a mega-hub, which nobody understood, and led Sony to make a very similar thrust with the PlayStation 3. The pivots away from those big plays may at first glance seem like attempts to atone or to broaden out their relationships with game makers, but I tend to think otherwise. What they’re actually about is developing a few show-bikes to go alongside the show-cars.

Indies vs Independents

There are several meanings of the term “indie.” For some it simply means financially independent, able to make games and revenue and be self-sustaining. For others the term is political, expressive of points of view and meaning. This second version is far more popular in the games press because it has more of an emotional component. Indies stand for something and become heroes fighting an unspecified “man.”

It may surprise you, but in the console-ist view the political kind of indie game is more desirable because it ticks the art-game box. Art games are rarely expected to make their money back, and certainly not to become big franchises. Yet there’s a lot of value in having them. If you can have a few notables like Jonathan Blow talking up your platform, a few Phil Fishes and a few “thatgamecompanys” making signature games, then this is a great story. It aligns you with the kind of story seen in Indie Game: The Movie and at GDC. Most important is that it gets the press on side, which is hugely important in the mutually assured destruction of console platforms. Appearing to be indie is worth acres of PR.

At the same time, supporting a few such indies allows platforms to retain their essential power. While PC gaming has always reserved much more power to the developer and treated hardware makers as little more than component makers, console gaming has always worked the other way. The console is the main brand and the platform story. The games all appear on the console with the holder’s say-so. The publishing model places the console brand front and center, and the games are in support, and the market tribally responds along those lines.

Taken in that vein, the modern console industry’s willingness to let indies into its playpen is pointed, but it is not embracing an open ecosystem any time soon. From the standpoint of where they’ve been, modest steps to change their model may seem like great leaps for Sony, Microsoft and Nintendo. Like TV executives who are still tentative about streaming, there’s a sense of not going too fast for fear of losing everything.

This is why Microsoft’s newfound message of developer liberation is still pretty garbled. The exact plans for how Xbox One will go indie-friendly come across as a bit hazy. They smack of a recent executive-level decision that will need some thoughtful re-engineering to figure out at the practical level, so don’t expect it at launch. Also, how it reconciles with some other showroom features (like the heavy push on mainstream TV) is anyone’s guess.

Not to let Sony off the hook, its plans for indie liberation are similarly convoluted. Sony still wants some forms of concept approval, which – even though the company promises a speedy turnaround – still sounds every bit as ludicrous as Roku wanting concept approval for movies it streams. It should make any developer pause and think seriously about what it implies.

Yet the bigger issue is that neither plan goes far enough. They do not represent change real enough to attract indies in the first sense of the word (financially independent). They’re also woefully out of step with just how far games have come. Developers are far more empowered today than they have been since the days of microcomputers in the ’80s, and they are not keen to sacrifice that freedom.

You Are Free To Do What We Tell You

It used to be imperative to placate Sony, Nintendo or Microsoft for any game to have a chance of being published. This was expensive, what with concept approvals, extensive technical requirements, and laborious quality assurance and certification processes. But what could you do? They were the gatekeepers, it was largely a relationships business, and that was that.

Even when they moved into digital markets they were choosy, taking an active role in content selection and publishing. Games were released on schedules to give a window for sales to build and platforms were managed like topiary. Not too many games of one genre or another, just a few key ones and a heavy sense of curation. All very bonsai.

Then Apple and Facebook upended that model with something more organic and irrevocably changed how developers thought of success. Success no longer meant being like Jonathan Blow or Ubisoft; it meant being like SuperCell. The console industry has never been able to fully understand the depth of that shift.

The way that developers approach making games on Facebook, iOS and Android is radically different to how things used to be when console platforms (and PCs) were all there was. They just do it: no dev kits, relationships, publishing schedules or concept approvals required. They may need to pass some curation (particularly from Apple) but those conditions tend to be far narrower in scope than anything the console industry ever imposed. Essentially: don’t crash, no porn, no defamation, and you’re good to go.

That new model is the one that breeds true independent game development success. The bonsai paradigm of consoles prevents developers from expanding too much, meaning that a thatgamecompany gets to make cool games but not really grow (should it want to, of course). Whereas the iOS/Android/Facebook model gives birth to Rovios and Zyngas (in happier times, perhaps). When platforms get out of the way and let software be software, software becomes wildly successful and the platform itself grows.

Obviously Rovio is an extreme case, but many other smaller studios have managed to forge their own destinies in a similar fashion. Studios like Spry Fox and NimbleBit make the games they want to make, how they want to make them, with whatever business model they desire, and it’s no big deal. So they are free to innovate and they do. Same for us at Jawfish.

Enter the Micros

Console makers do realize that they’ve painted themselves into a corner; they want to change and earn some press goodwill. Yet not to the extent that they detonate their existing business. Especially not when many of their fans prefer to cheer for stasis and buy into predictable franchises over innovation.

I don’t envy them, but that gap is why microconsoles are a real threat. OUYA, GamePop, GameStick, Mad Catz and whatever Google might be cooking up are relatively unencumbered by old constraints, and therefore able to empower indies in the first sense. The fact that they’re mostly using a common operating system helps, but their main advantage is the potential flexibility and the focus that being simple provides.

The first generation of microconsole hardware is less than stellar. Of course it is. The idea is brand new and still finding its way. The OUYA’s joypad, for example, isn’t good. The processors for most microconsoles are probably underpowered, and there are lots of early firmware and operating system issues. Look past these early-phase issues, however, and take in the longer view.

Microconsoles can iterate on hardware quickly, like phone makers, whereas Sony is stuck with a fixed spec for the PS4 for the next seven years. Big consoles have to be static because big publishers (like Activision) need the spec to be stable enough to master in order to make the next Call of Duty. A SuperCell, on the other hand, doesn’t. An iPad doesn’t. Indeed, nearly every other form of electronics has figured out how to move to an annualized cycle; console makers haven’t.

Beyond hardware, the next issue is the customer. Who are microconsoles for? Everyone. Everyone who likes to play games cheaply, for fun, with simple controllers and low (or free) prices. As we’ve seen on phone, tablet and Facebook, that translates to a hell of a lot of people. And before we get too worried about TV being somehow special in this regard, consider that this is circular thinking born of consoles being pretty bad as devices. They are only now getting into the idea that maybe they should have power/resume states like every other device you’ve owned since the turn of the millennium. Part of the reason they have that special gamer aura is that they are a hassle. There’s no reason for micros to follow the same path.

Power Shifts

The future that I see for console gaming is one where hardware incrementally cedes power to software. Pushed on one side by microconsoles offering a vastly cheaper option, and on the other by developers of incredible games with the right business models, all three current console platform holders face a very real prospect of being reduced to vertically satisfying only their core fans. The prospect of big publishers taking a bath is also very real.

It will take microconsole makers a couple of iterations to get their hardware and business models right. It may take the entrance of a big player like Google or Samsung to validate the category (much as Amazon did for ebooks). There will also be that initial flurry of press coverage that will swamp all channels with talk of PS4 vs X1 (and ill-advised laments for Nintendo) for the next 18 months. That will obscure the real story to an extent, allowing OUYA et al room to breathe and pivot.

But in the medium term? The new SuperCells will not be coming from these revamped “indie” console offerings. They’ll come from a very different kind of device entirely.

(If you’d like to hear more, come see me talk about microconsoles some more at Casual Connect this week in San Francisco.)

Editor’s note: Robby Walker is co-founder and CTO of Cue. His previous company, Zenter, sold to Google in 2007. Follow him on Twitter @rwalker.

Startups like Stripe, Weebly, and Cue have spent weeks of valuable engineering time building programming challenges. And tens of thousands of engineers spend their valuable personal time playing them. Why? Because programming challenges require coding ability (just like startups). Also, challenges are fun because the participant gets to solve problems quickly (just like startups) unhindered by anything other than their own ability (unlike big companies).

Programming challenges are a fantastic way to connect great people with great jobs, particularly great jobs at startups. For example, more than half of the hires at Cue have come from our two programming challenges.

Competition: Stripe’s Capture The Flag

Stripe has run two massive capture the flag contests: extremely elaborate security challenges that test an attacker’s ability to discover and exploit security holes. CTFs are really hard. Really. Hard. So hard, in fact, that out of 10,000 entrants in the first CTF, there were only 200 completions.
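For flavor, here is the kind of deliberately vulnerable code a CTF level might hand you: a toy Node/Express endpoint with a command-injection hole. This is a generic illustration of the genre, not an actual Stripe CTF level; the route and flag file are invented.

```typescript
// Toy example of the sort of bug a CTF asks you to find and exploit.
// Generic illustration only; not taken from Stripe's actual CTF.
import express from "express";
import { exec } from "child_process";

const app = express();

// Vulnerable: user input is interpolated straight into a shell command,
// so a request like /ping?host=example.com;cat%20/flag.txt leaks the flag.
app.get("/ping", (req, res) => {
  exec(`ping -c 1 ${req.query.host}`, (_err, stdout) => {
    res.type("text/plain").send(stdout);
  });
});

app.listen(3000, () => console.log("CTF level listening on :3000"));
```

The fix, for the record, is to validate the hostname and use execFile so the argument never touches a shell; a CTF level buries a flaw like this under many layers of misdirection.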

Greg Brockman, who ran the CTFs, remembers that it took a lot of work. “We pulled all-nighters to get it ready, and then had to deal with babysitting the machines as things were getting forkbombed.” Greg noted, “we were very conscious of security and about separating the CTF from Stripe itself.”

The CTF took on a life of its own. “People reimplemented the levels and hosted them elsewhere… we expected 10-100 people to poke around… and then O(10k) people did it,” Greg said.

Stripe’s second CTF featured a leaderboard and a pre-announced start time, making it a race to the finish. “We couldn’t go to sleep until someone had solved it; otherwise maybe it’s just too hard,” recalled Greg. Finally a user, identified as “wgrant,” solved the challenge, and the Stripe team “went home and slept.”

Stripe attracted people who loved to code by making their challenge really hard (also by giving them t-shirts). The company has made several hires through the CTF.

Curiosity: Weebly’s Easter Egg

On the other side of the programming challenge spectrum is the single, innocuous line in Weebly’s job listing for a front-end web engineer: “There is a puzzle embedded in our jobs page…”

This is nerd sniping at its finest – a thread that many engineers can’t help but tug. Solving the puzzle requires skills that are necessary to work at Weebly, including using a JavaScript console and debugging HTTP traffic.
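As a purely hypothetical illustration of the genre (not Weebly’s actual puzzle), an easter egg like this can be as simple as a breadcrumb left where only someone poking around with developer tools will find it:

```typescript
// Hypothetical jobs-page easter egg in the spirit of Weebly's puzzle.
// Every name and clue below is invented for illustration.
console.log(
  "%cCurious? There's more where this came from. Keep digging.",
  "font-family: monospace; font-size: 14px;"
);

// A second breadcrumb hidden in the markup for those who inspect the page.
const hint = document.createElement("meta");
hint.name = "puzzle-step-1";
hint.content = "The next clue travels in an HTTP response header.";
document.head.appendChild(hint);
```

Finding both breadcrumbs already exercises exactly the skills the listing screens for: comfort in a JavaScript console and a willingness to inspect HTTP traffic.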

Weebly’s first engineering hire found the company because of the puzzle, and every engineer since has had to complete it as a prerequisite of being hired.

Though it’s a requirement, most people don’t see it that way. CEO David Rusenko notes, “It’s seen less as a gatekeeper and more as a fun thing. It attracts great people instead of cutting down on the applications.”

Nostalgia: The Colossal Cue Adventure

Last month at Cue, we released The Colossal Cue Adventure: it’s part programming challenge and part homage to text-adventure games of the ’70s and ’80s, and it’s all bad jokes.

One of the things we have learned about how to make a successful programming challenge is to make it fun. When the Cue Adventure hit Hacker News, the comments section quickly evolved into a nostalgia board for Zork, one of the games that the adventure emulates. From a programming perspective our adventure is pretty easy – we added a bonus level for the diehards – but we mostly just wanted to strike up a conversation with like-minded people. A person who spends time writing code to complete an old-school game is a person we want to talk to.

The best startups create an environment where each team member is their own limiting factor. Not politics, overwrought processes or organizational apathy. This is why talented people join startups, forsaking giant salaries and free massages for the opportunity to ship amazing solutions to hard problems on a daily basis. It’s this kind of person who is attracted to a programming challenge.

If you’re an engineer looking for a new opportunity, consider trying a few programming challenges to see what you can learn about your potential employer. If you’re a founder considering launching a challenge, do it. Make sure it stands out somehow. Each of the above challenges stood out in some way – nostalgia, competition, or curiosity.

Weebly, Stripe and Cue still get results from traditional hiring methods, like recruiters and employee referral bonuses. Comparatively, a programming challenge may seem like an incredibly large investment of time and effort. Devoting a week or more to creating a challenge is a difficult decision – especially when you’re already short-staffed! While programming challenges are admittedly high effort, they are also high reward (just like startups).