I still remember that “aha” moment when, as a young lawyer, I looked back on the couple dozen trade secret disputes I had handled and realized what they all had in common. It was so obvious I couldn’t understand why I hadn’t seen it before. Although every case had its own special facts reflecting unique personalities, technologies and business models, one necessary element was present in every single case. Somebody had done something stupid. And they still do.
Sometimes it’s about what people do when getting ready to leave their job and go into competition. They brazenly solicit customers or foment discontent among the staff they want to recruit. They use the company’s computer system to research and prepare their business plan. They download thousands of confidential files they’re not supposed to have anyway, and then try to cover their tracks by using specialized software – I’m not making this up – called “Evidence Destroyer.”
Sometimes it’s about what they do on the way out the door. Feeling overconfident about having just told their management what they could do with that job, they try to sound important and say that the big customer has already hired them and they’re going to be shipping product the next week. Or they decide at the last minute to grab the entire library of software they’ve ever written, just as a “personal reference tool.”
Sometimes it’s about what they do or say after they leave. The departed manager who runs into his former boss at the airport and decides to “joke” about how he’s already hired away the best people who will get him to market in record time. Or, while still in touch with the folks he intends to recruit, he sends them a text: “Don’t use email because they can see what we are doing. Use texts instead so they won’t know.” (Surprise: the texts are discovered, and ultimately read in court.) Or, having taken the former employer’s confidential drawings, he uses whiteout to cover the old company’s name and substitute the new one.
And it isn’t always the departing employee who behaves in ways that make you slap your forehead in disbelief. A company sues a departing executive, and it turns out that she’s never signed a confidentiality agreement. Another one creates a system for classifying and marking secret documents but never enforces it, leading to the assumption that anything unmarked is unprotected. Another hires a technician from a direct competitor, triples his salary, puts him in a room and says, “Invent something,” then acts surprised when caught using the competitor’s secret process. Finally, there are the companies deciding whether to “buy or build” a technology, who take the people who have been exposed to the other company’s information under NDAs and assign them to the in-house development team.
These examples are not made up; they are just a sample of the facts from real cases. It’s unsettling to realize that your chosen professional specialty relies on such consistent human failings.
Why do people do these things? My first assumption was that they just didn’t know the basic ground rules for competing fairly. If they did, these cases would disappear. So I wrote a little book and published it in 1982, figuring that I was doing a public service by teaching people how to avoid trouble. Instead, the cases kept coming. I wasn’t making even a small dent in the problem.
Over time, I developed a new theory that the strongest force in the universe is not hidden inside some black hole, but consists of human denial. We are so good at justifying what we do. We see what we want to see (it’s called confirmation bias) and we have reassuring, private conversations with ourselves. Risk? I can’t be distracted by negative thoughts. My carefully crafted plan makes so much sense. What can go wrong? I’m just engaged in old-fashioned competition. Those morons I’m leaving behind don’t care about this product/market/technology, or they would have promoted me months ago when I floated the idea.
That’s the sort of thinking that can lead to disaster, and it seems hard-wired in our brains.
So are we doomed as a species to keep repeating the same kinds of mistakes, getting ourselves into unnecessary problems? Maybe not. There’s a new wave of technology that is inserting itself into our decision loops and offers the potential to protect us from our mistakes. It consists of systems, mainly software, that run silently alongside us while we are working, monitoring what we do from our keyboards and looking for decisions that might imperil the company’s confidential information.
The simplest form of this technology might be the box that pops up on the screen as we are about to send an email, asking us if it needs special marking to go outside the company. Or when we’re saving a document, we might be given some options to classify it according to its sensitivity. More complex systems monitor and record our behavior closely, learning what records we access and how often we download and print documents, triggering an alarm if we behave outside the norm.
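The core idea behind those alarms can be sketched very simply: learn a baseline of a user’s typical activity, then flag behavior that deviates sharply from it. Here is a minimal, hypothetical illustration in Python – the feature (daily file downloads) and the threshold are my own illustrative choices, not taken from any real product:

```python
import statistics

def is_anomalous(history, current, k=3.0):
    """Flag activity far outside a user's baseline.

    history: past daily counts of an action (e.g., files downloaded)
    current: today's count
    k: tolerated number of standard deviations (hypothetical threshold)
    """
    if len(history) < 2:
        return False  # too little data to establish a baseline
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    # Use a floor on the spread so a perfectly regular history
    # doesn't flag every tiny variation.
    return current > mean + k * max(stdev, 1.0)

# A user who normally downloads a handful of files per day:
baseline = [3, 5, 4, 6, 2, 5, 4]
is_anomalous(baseline, 4)    # an ordinary day, no alarm
is_anomalous(baseline, 900)  # a thousands-of-files spree, alarm
```

Real monitoring systems weigh many more signals – access times, destinations, printing volume – but the shape of the logic is the same: the software, rather than the employee, does the second-guessing.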
While some of this may seem disconcerting from a personal privacy perspective, let’s leave that issue for now and consider a possible bright side of this pervasive, and growing, intervention in our work lives. Given that we seem to be hopelessly inclined toward avoidable errors, might such systems actually be a great help by warning us before we download all those files to the USB drive, or we email the confidential report to our personal account so we can work on it from home?
At a recent trade secret seminar for lawyers, I suggested that as artificial intelligence enables these analytic tools to learn more about our behaviors and how to jump in and prevent disaster, maybe the time will come when technology saves us from ourselves. Just maybe the systems will become so smart that they will compensate for our lack of judgment and protect us. A colleague responded that no, human ingenuity has always won the race with machines. We’re just too smart, he said, to stop doing stupid things.