Resignation with the breach

by Rich on March 6, 2014

Target’s CIO has resigned following the company’s massive credit card breach. This is pretty unsurprising. When I did security management, I knew I’d be out the door if we had a credit card breach, and the worst possible outcome I could have been associated with is dwarfed by Target’s incident.

What’s interesting to me, though, is all of the talk about responsibility, the value of PCI, and whose failing it was. There is no question that Target (and Neiman Marcus, and plenty of other merchants) failed at security when they lost control of sensitive data they were trusted and required to protect. I’ll get back to Target, because there were a lot of failures involved that went well beyond the merchants.

VeriFone failed when they released a vulnerable system. They provided the POS and card-swipe devices to Target, Neiman Marcus, and almost everywhere else on earth. The next time you pay for something in the US, look at the credit card terminal. It probably has a VeriFone logo on it, which means that merchant is likely as vulnerable as Target was.

VeriFone made a couple of critical mistakes. First, they failed to encrypt card info soon enough. The attackers’ software was able to grab card information from the terminal before it was encrypted. I’m not clear on whether that happened on the terminal itself (i.e. the swipe device) or on the cash register, but it doesn’t matter. The data made it into memory in an application on some box where the attacker could see it and grab it. Second, VeriFone failed by permitting the attackers to get their software running on the device in the first place. It clearly wasn’t supposed to be there, yet the system didn’t prevent its installation and either didn’t detect it or didn’t think it was important. That’s a bigger failing than the encryption one.

PCI standards failed. I don’t know (and don’t care) whether Target was PCI compliant. The nature of the attack was more sophisticated than PCI was prepared to cope with. Let me say that again:

The nature of the attack was more sophisticated than PCI was prepared to cope with.

That’s pretty amazing, when you think about it. If you’ve never dealt with PCI (congrats, you were smarter than I was) it’s a beast of a standard. It is the most prescriptive, the most restrictive, the most rigorous, the most complicated, and the most fully audited security standard in the private sector. I know of many companies that stalled for years in even trying to achieve PCI compliance because it’s really hard, really expensive, and really unpleasant. Now that I no longer have to manage to it myself, I’m glad for all of those things. Credit card info is extremely valuable and it’s very disruptive to the lives of victims when thieves get their hands on it. Merchants don’t lose out, customers do, and a repressive security standard balances what is otherwise a very asymmetrical risk — it forces merchants to care.

And yet, with all of its dictates and difficulties, PCI failed. It didn’t address this incident.

I’m sure someone could find a requirement in PCI that could be interpreted in a way which applies. My best guess is it would be somewhere related to monitoring for unauthorized system changes. That’s probably technically correct, but it misses the bigger point. Nobody managed PCI that way. And that’s where you find the biggest failing.

Security management practices failed.

There are all sorts of ways to measure the success of a security management program. The most desirable and least useful is “did the bad guys get my stuff”. It’s least useful because it only tells you when you’re doing a bad job, never when you’re doing a good one, because you don’t know anything about the quality of the bad guys. So that’s not how people manage.

Instead, they tend to follow two basic measurements:

  • Am I compliant with the necessary standards?
  • Am I applying a reasonable and appropriate level of security?

There are countless KPIs, processes, goals, assessments, TPS reports, and dashboards used in actual practice, but every one of those matter-of-fact methods is a reflection of the two basic measurements.

We already know that the first wasn’t enough, since PCI failed. Neither was the second, not because Target wasn’t reasonable and appropriate in their security practices, but because nobody can realistically define what reasonable is.

The idea of reasonable security is both specific and nebulous, like obviousness in patents. If a skilled practitioner would find measures warranted in a given situation, reasonable security includes them. That can balance value, threats, risks, costs, and all of the other knobs you turn when making security decisions.

The pace at which threats are evolving is mind boggling. The value of an asset to an owner varies from the value of an asset to an attacker. The people doing the defending can’t see into the black markets and criminal underworld driving threats and shifting objectives, so they can’t get anything close to a fair assessment of the risk.

Further, they evaluate their risks, not systemic ones. Target lost a lot of credit card numbers, but they didn’t care at all about Neiman Marcus, or Michaels, or anyone else. VeriFone did, but VeriFone’s exposure in the attack was dramatically different than the merchants’.

Risk always looks simple. Likelihood of loss * Amount of loss = Risk. Target knew what the amount of loss would be, because the number of credit cards was countable. Likelihood of loss, though, encompasses more variables than any security manager can identify, much less account for. Value of the asset to an attacker, resources available to an attacker, applicability of an attack across ecosystems, aggregate value of assets within an ecosystem, hardware-level operations of commercial security devices, patience of an attacker, effectiveness of vendor security and trustworthiness of vendor code. You make a million assumptions in security management or you never make a decision.
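To make that asymmetry concrete, here is a minimal sketch of the formula in Python. Every number in it is hypothetical, chosen only for illustration; the point is that the amount side is countable while the likelihood side is a guess that can swing the result by orders of magnitude:

```python
def risk(likelihood: float, amount: float) -> float:
    """Expected loss: probability of a breach times the cost if it happens."""
    return likelihood * amount

# The amount side is countable: cards on file times a cost per card.
# Both figures below are made up for the sketch.
cards_on_file = 40_000_000
cost_per_card = 5.00
amount = cards_on_file * cost_per_card

# The likelihood side is guesswork. An estimate that is off by a
# factor of ten (easy, given everything a defender can't see) swamps
# any precision on the amount side.
for likelihood in (0.001, 0.01, 0.1):
    print(f"p={likelihood:<6} expected loss = ${risk(likelihood, amount):,.0f}")
```

Running it shows three "risks" two orders of magnitude apart from the same countable loss, which is the whole problem with treating the formula as simple.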

And so, with all of that, I can come back to Target: VeriFone’s failings, PCI’s failings, the failings of the security community and its notions of reasonable action and best practice, the industry’s failure to clearly articulate the threat environment, and Target, who had to make decisions amid all of this.

Did Target fail? Absolutely. They lost something like 110 million credit cards.

Did they fail doing everything right? I can’t say if they did, but they sure could have.

 

{ Comments on this entry are closed }

Yesterday I started listening to a new writing podcast. It features interviews with writers and covers fiction, non-fiction, screenwriting, and probably anything else you can scribble down. The first episode I pulled down was on “Selling your work”.

Unfortunately, it lacked a critical subtitle: “We know nothing about this.”

I’m dramatizing here, but it’s remarkably close.

Host: How did you sell that first novel?

Interviewee: Well, it was associated with a TV show, so I called up an agent, and a week later he called me back and said it was sold.

Host: Wow, that seems fast!

Interviewee: Does it? Cool.

Host: How was the advance?

Interviewee: Small, but it was my first novel, so I took whatever they offered. [NB, this is the only part of the discussion with actual content.]

Host: And how about royalties? What’s your rate? 15%?

Interviewee: I don’t think so. It’s probably more like 8%. Is that right? Maybe 7.5%? I don’t really know.

Host: Can you renegotiate? I don’t know what’s standard. Is that right?

Interviewee: No clue. I get a statement every six months that says something about royalties. It probably explains things.

I had to stop listening there. Can you imagine anyone peddling advice doing that?

“Hi, welcome to Car Talk. We really don’t know anything about your car, so take it to your mechanic and pay him whatever he wants — that’s our advice. And now, the puzzler!”

“‘Dear Abby, how do I fix my problem with my family?’ Wow, that’s a tricky one. Did you try hoping that it will get better on its own?”

“Well, Mr. Jones, I’ve reviewed the contract. In my professional opinion, I think you should probably read it before you sign it. There might be a problem in there, but I couldn’t really say?”

“Your test results came in, and I have them here. Does that seem normal to you for a kidney?”

If your goal is to discuss how to treat writing as a business, you need to at least pretend to treat it like a business. A kid mowing lawns for the summer is more rigorous. Working the numbers is not shameful; it’s your responsibility.
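The numbers the host and guest fumbled are not hard. A minimal sketch, with the list price, royalty rate, and advance all made up for illustration (the 7.5% echoes the rate the interviewee guessed at):

```python
# Hypothetical figures throughout; the point is that the arithmetic
# any working writer should be able to do takes four lines.
list_price = 24.95      # assumed hardcover list price
royalty_rate = 0.075    # the ~7.5% the interviewee guessed at
advance = 5000.00       # an assumed small first-novel advance

royalty_per_copy = list_price * royalty_rate
copies_to_earn_out = advance / royalty_per_copy
print(f"${royalty_per_copy:.2f} per copy; "
      f"{copies_to_earn_out:,.0f} copies sold to earn out the advance")
```

Knowing roughly how many copies stand between you and your first royalty check is the kind of answer the interview never got near.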


Google Reader’s move from active product to Google Hospice caused a great mix of despair at the loss of a great tool and hope for the future. Without a highly-functional but traditionally oriented market dominator, RSS will finally have a chance to grow up and be everything Reader users never knew was possible.

I’m a big Reader user and I’m going to continue to use it until the last possible moment. I think one place to watch for direction on RSS’s rebirth, if they can get their act together, is Yahoo.

Lots of companies make RSS readers that give me my news. Reader is dull and super functional, Flipboard is nine kinds of slick. All of them, though, are on the output-end of an RSS feed. Yahoo Pipes isn’t a reader, it’s a creator. It consumes RSS feeds, queries web data, parses sites, filters results, and presents it all in any format you want.

[Screenshot: a custom Yahoo Pipe building my Affleck/Franco-free movie news]

I can view that as a web page or an RSS feed, query it as JSON, or consume it just about any other way.
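Pipes itself is a visual tool, but the kind of pipe described above, consuming a movie-news feed and filtering out unwanted items, is easy to sketch in plain Python. The feed below is inline dummy XML so the sketch is self-contained; a real pipe would fetch a live feed URL:

```python
import xml.etree.ElementTree as ET

# Dummy stand-in for a real movie-news RSS feed.
RSS = """<rss version="2.0"><channel>
  <title>Movie News</title>
  <item><title>Affleck cast in new thriller</title></item>
  <item><title>Studio greenlights heist film</title></item>
  <item><title>Franco to direct adaptation</title></item>
</channel></rss>"""

BLOCKLIST = ("affleck", "franco")

def filtered_titles(rss_text: str) -> list[str]:
    """Return item titles whose text mentions no blocklisted name."""
    root = ET.fromstring(rss_text)
    titles = (item.findtext("title", "") for item in root.iter("item"))
    return [t for t in titles if not any(b in t.lower() for b in BLOCKLIST)]

print(filtered_titles(RSS))  # only the heist film survives the filter
```

That's the creator side of RSS in a dozen lines: consume a feed, transform it, and re-emit whatever subset you actually want to read.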

Pipes doesn’t solve the problem of how to replace Google Reader, because replacing Reader isn’t the big problem. How we get news is the big problem. Yahoo and Google both understand really well how to manipulate RSS. Google is taking away a tool that lets you read it; Yahoo, for several years, has supported a tool that lets you make it.

Yahoo did a great job reviving a foundering Flickr. If they can get it together, they have everything they need to not just save the news reader, but wholly remake it.

 


Electronic Voting Is A Stupid Idea

by Rich on November 6, 2012

MSNBC posted a story with video about an electronic touch-screen voting machine changing a clear selection for one US presidential candidate into (at least apparently) a vote for another. This pretty well lays the groundwork for the reasons electronic voting is a bad idea.

[Image: about as reliable as electronic voting]

There are all kinds of great things we can do — and do better — with electronic systems than with traditional pen and paper. Many of these are highly sensitive jobs with a lot of time, money, personal, professional, or national interest at stake. Finance, health care, even launching nuclear missiles are all better and more reliable thanks to electronic automation.

Voting isn’t.

I recently got a call from my Visa card’s issuing bank asking if I’d just spent $1600 at an Apple store in Belgium. Since I’ve never been to Belgium (and haven’t been to Europe in more than twenty years), I was confident the answer was no. They were able to reverse the transaction and save me from the fraudulent use of my credit card.

That can’t happen with electronic voting. There are two reasons.

Voting is anonymous

Systems which process sensitive transactions rely on strong identification of the participants in those transactions. Am I really the owner of this bank account, this health record? Confirmation that I am authorized to perform an action is essential to the transaction: not just when entering it, but when auditing it afterwards. Consider the Visa example. Someone had a fraudulent copy of my credit card in Belgium. It was sufficiently good (or they were in cahoots with a store employee) to allow a transaction to proceed. That was a failure in authenticating the card holder. That’s bad for Visa (who makes it bad for the merchant). It wasn’t bad for me, though, as Visa’s process to verify the transaction confirmed that I had not participated.

Identification + Audit = Security (before and after the fact)

Votes must be separate from voting systems

My vote is the record of my intent at the polls. The system which records my vote is an accounting system which tracks those records. In all-electronic voting, you can’t separate those, as my use of the voting system is direct input into the accounting system. There’s no paper trail to fall back on. A failure of the accounting system is a failure of the record itself — as you can see in the video linked above.

There are all kinds of ways that system can fail:

  • A programming error — and there are always programming errors
  • Deliberately malicious programming intended to manipulate voting records. In large complex systems (like, say, voting machines) these can be very difficult to detect
  • Manipulation of the voting machine by a knowledgeable user to modify voting records (over count, under count, change totals, etc.) This is typically related to programming errors, but the effect is at the “counting” end rather than the “voting” end

Programmers and security experts can do a lot of neat tricks to minimize these issues, but they can only reduce them. And because the record of the vote is created by the potentially compromised system, it can be very difficult, if not impossible, to detect the nature of the error or manipulation.

Compare that to paper votes. My home state of Minnesota uses paper bubble forms which are scanned on submission. This is widely regarded as one of the most reliable ways of conducting elections. I fill out my paper form and feed it into the scanner, which tallies my vote. Simple and fast.

When everything works, it works as quickly and as well as all-electronic voting.

And then there’s a recount.

They run the ballots back through the scanners (maybe the same scanners, maybe different ones) and get another total. They can compare objective results. They can count by hand if necessary (though, personally, I think that’s less likely to be reliable than by machine, but that’s a different discussion).

In all cases, the integrity of my original ballot is maintained. The only way that changes is catastrophic failure of the scanner which destroys the paper, in which case the failure is limited to a single vote lost, or malfeasance of an election judge who modifies my ballot. In that case, the scope of their bad actions is more easily limited. It’s much harder to modify a large number of paper records in a room full of skeptical onlookers (and opposite-party judges) than it is to modify a large number of electronic records. With the right system failures, those can be changed with the touch of a button.

With credit cards, paper slips are still a last-line guarantee in some cases. I disputed a charge made at an out-of-state florist. I’d never used it, nor had my wife, and neither of us were in that state at that time. Visa requested a signature slip from the merchant, who couldn’t provide it. Problem solved, for me anyway. The integrity of my system was maintained. Nobody was able to misappropriate my money (or vote).

That can’t work in an anonymous voting system. It would be like trying to verify credit card transactions without knowing the card numbers, the shoppers, or the authorization codes. There’s no way to distinguish between legitimate and illegitimate votes. That may be true of paper as well, but the scale on which electronic votes can be manipulated isn’t contained by physics/observability as it is with physical votes.

With electronic transaction systems where you only get one shot per participant, you need to contain errors. You can’t do that if you can’t separate the record from the accounting system. You can’t do that if you can’t provide limits on the scope of record modifications. Cryptographers and computer scientists can do really cool things to contain those problems, but they can’t change the fundamentals.

Electronic systems are really bad at both of the main jobs we need voting systems to do.


Who knows what evil lurks in the hearts of men? I imagine anyone taking a large investment from Microsoft spends at least a moment wondering what The Shadow knows that they don’t.

Microsoft’s $300 million investment in Barnes and Noble’s Nook subsidiary turned heads, and is seen as an interesting, if confusing, effort by Microsoft to get back into ebooks and to find early traction for Windows 8 devices.

I think there’s a much better reason for them to have made the investment.

Apple’s wild success with its mobile devices is closely tied to the strength of iTunes stores: apps, music, video, and, to a lesser extent, books. Anyone who uses an Apple device knows where to go and knows how to get what they want. iTunes has its problems, but it’s available, it’s comprehensive, and it works.

Google, with their rebranding of the Android Marketplace as Play, is trying to build the ecosystem Apple has in the iTunes stores. They see the value in a one-stop shop.

Amazon and Kindle are exactly the same.

So what about Microsoft and Barnes and Noble? Microsoft has an app store for Windows Phone, but they don’t have all of the other content. Barnes and Noble already sells ebooks on the Nook, plus music and video in physical distribution. It seems a shorter hop to take those existing distribution relationships and turn them into working electronic relationships than it does for Microsoft to create them from whole cloth.

Purchasing content for Nook needs a lot of work and polish. Barnes and Noble has struggled there, but they’re really a retailer, not a software company.

If Microsoft is smart about this, Nook will be the way to buy content for Windows mobile devices. With just a $300 million investment, Microsoft got a big jumpstart on a complete content ecosystem.


In a move that should have shocked no one, Apple updated their iBooks Author EULA. The new EULA clearly confirms what calm and patient folks said from the beginning: Apple isn’t interested in restricting your ability to distribute your content. They only want a piece of what is built specifically with iBA:

this restriction will not apply to the content of the work when distributed in a form that does not include files in the .ibooks format generated using iBooks Author. You retain all your rights in the content of your works, and you may distribute such content by any means when it does not include files in the .ibooks format generated by iBooks Author.

Don’t worry, though. I’m sure there’s still something scary hiding under your bed.


Stop Hyperventilating Over iBooks Author

by Rich on January 26, 2012

I think everyone is seriously overreacting to the iBooks Author EULA. I don’t particularly like the EULA. I think it’s overly restrictive. But I also think everyone is focusing on the wrong part.

There’s no disagreement about the terms surrounding “Distribution of your Work”. It all hinges on what is “The Work”.

If you’re talking about selling a rich multimedia immersive super whiz-bang textbook that takes advantage of all of iBooks Author’s magic, you must sell it through the iBookstore. This isn’t (only) because of the EULA. It’s because your book is only viewable inside iBooks. You can’t play it on a Kindle, a Nook, a Fire, or anything else. It’s an Apple proprietary format in exactly the same way that mobi is an Amazon proprietary format. The difference is that for Amazon, the format is proprietary solely for the sake of control, while for Apple, .ibooks has unique features impossible in ePub3. So what’s the problem?

If you write the great American novel, or the great Victorian bodice-ripper, or the great vampire/werewolf/zombie death match chronicles, and you produce a .ibooks file with iBA, you must sell that in iBookstore.

But “The Work” is not your copyrighted text. “The Work” is your finished iBooks product (which, for technical reasons, is only usable in iBooks). You can paste your text into Pages, export an ePub2, and sell it on Barnes and Noble, or save a .doc and upload it to Smashwords. Sell a PDF on your website to your heart’s content.

Does anyone really believe Apple is going to stop people from doing that? Do they continue to believe it if their tinfoil hats are on correctly?

Apple’s EULA is clumsy in not making that adequately clear, and I strongly believe they will update it to reflect that, but I don’t believe they’re claiming control of your text. They’re claiming distribution control of the finished product produced using a remarkably advanced tool they’ve made available for free which is only usable in their marketplace anyway. If you don’t like that because you want the same product everywhere, make an ePub (still the default format for iBookstore) yourself. You can do it in Pages with just a couple of clicks.

Honestly, just breathe into a paper bag for a minute. You’ll feel better.


Apple Adopts Cold War Tactics

by Rich on January 19, 2012

Apple’s recent announcement that it joined the Fair Labor Association is good news for the effort to promote humane labor standards. As a cynical and tactical effort, though, I think it’s comparable to the US military buildup during the Cold War which contributed to the collapse of the Soviet Union.

Apple is the only consumer electronics manufacturer with anything other than razor thin margins. Most manufacturers subsist on single-digit margins and squeak by on volume and struggles for efficiency. Apple, thanks to Tim Cook, is incredibly efficient, and their ability to focus on a small number of products and premium profits (at comparable prices) make their margins the envy of the electronics world. That’s how they sell a small fraction of the world’s mobile phones but rake in 2/3 of the world-wide mobile profit.

With a huge margin advantage, Apple can press the fair labor issue. If they’re able, through public pressure, to set expectations around labor standards for electronics manufacturers, they can raise the cost of manufacturing a few percent. Samsung can’t absorb this increase. Neither can LG, HTC, or HP.

For Apple, a few percent is unfortunate for their bottom line, but they’ll still make a fortune. Competing on labor standards could move Apple from being perceived as a premium price/premium quality participant to comparable or even value priced while maintaining premium quality.

How can HP, Asus, or Samsung compete with that?


SOPA and Lost Sales

by Rich on January 16, 2012

A lost sale is customer demand that can’t be filled. SOPA, and most discussions of digital piracy, treat every illegitimate download as a lost sale. I’m not pro-pirate, but that understanding of lost sales is a bad reading of the issue.

Tim O’Reilly nailed the problem in a recent Google+ post: the lack of clear evidence of economic harm due to electronic piracy. There’s plenty of emotionally loaded evidence in the form of “he’s watching my show/reading my book/playing my game and never paid for it.” This is absolutely true, absolutely unfortunate, and absolutely indicative of bad action on the part of the pirate.

But illicit use doesn’t translate to lost sales. O’Reilly points out that his books are widely pirated. I’ve been an enthusiastic customer of O’Reilly for well over a decade, and his company produces great books. They are, almost always, the best technical resources on the topics they cover. With a highly technical audience, his customers and potential customers are also the most able to find a way to pirate his company’s material. I have to think that O’Reilly books are pirated at a higher rate than most.

But, he argues, if the pirates would never have bought a copy anyway, there’s no economic harm.

Most people in the world will never buy an O’Reilly book. They’d be better off if they did, but for lack of interest or lack of honesty, they won’t. 0% of those people count as lost sales, so 0% of their activity contributes to economic harm.

When we’re talking about electronic goods, the small group in the middle, the pirates who would otherwise have paid for a copy, are the ones who hurt O’Reilly’s (or any other producer’s) bottom line. The rest are wrong, annoying bad actors. But they’re not taking anything from O’Reilly in any sense other than an emotional one.

I’m not condoning piracy, but O’Reilly is right that pro-SOPA politicians and media companies are asking the wrong question. They’re assessing the problem based on the size of the circle of crooks, not the subset of crooks who are doing more-than-emotional harm. That’s not a recipe for success.


In an interview with Digital Book World, Hyperion CEO Ellen Archer spoke candidly (for a CEO, at least) about the changes to publishing caused by the growth of digital media. It’s a good interview, and well worth a read.

Some of her points showed that she understood the concerns propelling indie publishers better than traditional publishers are expected to understand them. What interested me the most was the discussion of funding, sales, and advances. The quote which has been singled out is

We’ve been able to provide advances to authors and unfortunately most of those [advances] don’t drive revenue.

Most of the comments I’ve seen express severely burned bacon. Wading through the commenters’ sarcasm, the complaint with Archer seems to be based on reading this as distaste for having to pay authors. I don’t think that’s what she meant, and I don’t think it’s a fair analysis.

Most traditionally published books don’t earn out their advances. There are lots of reasons for this, which encapsulate many indies’ frustration with traditional publishers, but regardless of the reasons, it’s still true. An advance is exactly what its name implies: an advance against future payments.
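The risk transfer an advance represents is easy to see with a little arithmetic. A hedged sketch with entirely hypothetical numbers: with an advance, the author's income is floored at the advance even if the book never earns out; without one, the author eats the whole shortfall.

```python
# All figures below are made up for illustration.
advance = 10_000.00        # assumed advance against royalties
royalty_per_copy = 2.00    # assumed author earnings per copy sold

def author_income(copies_sold: int, with_advance: bool) -> float:
    """Author's total take for a given sales outcome."""
    earned = copies_sold * royalty_per_copy
    if with_advance:
        # The author keeps the advance even if royalties never earn out;
        # the publisher bears the downside of a weak seller.
        return max(advance, earned)
    return earned

# A flop, a middling seller, and a hit.
for copies in (1_000, 5_000, 20_000):
    print(copies, author_income(copies, True), author_income(copies, False))
```

On the flop, the advance-holding author still takes the full advance while the advance-less author collects a fraction of it; on the hit, the two converge. That floor on the downside is exactly the risk the publisher is absorbing.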

Advances are important in publishing risk management for a simple reason. They transfer risk from the author to the publisher. If publishers are losing their taste for taking on that risk, the only one left to bear it is the author. I suppose you could devise some sort of “crop insurance” program for your novel, but good luck getting someone to underwrite it.

If the future of traditional publishing is to expect the author to take on more risk (which is certainly the case in indie publishing), authors ought to reasonably expect more return. That’s the balance. Risk vs. reward.

Ultimately, that’s the publishing calculus, for traditional or indie. Traditional seems primed to shift more of the risk burden toward authors. That means authors should be prepared to pull some of the reward with it.
