This post isn't directly concerned with tech's impact outside the US. But it is concerned with some of the largest tech (and non-tech) companies impacting life around the globe, so it feels relevant.
And besides: it's timely, and I have ~thoughts~ from my own years of experience 🙂.
OpenAI recently announced a new internal oversight board, tasked with giving "recommendations on critical safety and security decisions for all OpenAI projects." If this is the first you've heard of a big, moneyed tech company installing some flavor of "ethics board" to ensure it doesn't run roughshod over societal norms of fair labor, legal standards, not killing people, and so on, then you might be forgiven for believing OpenAI's board might actually accomplish this.
But if you've paid any attention to the goings-on in Big Tech over the past decade (and not just Big Tech, but Big Business writ large), I hope your eyes rolled as hard into the backs of their sockets as mine did when you heard OpenAI's announcement.
Astute readers steeped in Silicon Valley "culture" may complain here that OpenAI wasn't actually, technically, founded as a for-profit company – and that its internal oversight board will therefore really(!), truly, pinky promise, be different.
For one, I doubt it. Everything about OpenAI besides its fuzzy nonprofit status suggests it's the Current Hot Thing in Silicon Valley, and nothing stays Hot and Current and Disruptive there without heaps and heaps of money getting thrown at it.
And for two, it doesn't really matter. Maybe, maybe, OpenAI's board will be the first in a long while to actually wield real power: the ability to turn a great corporate ship – even, if needed, away from the guiding lights of Growth and Profit.
But even if it does, it would be the first, and last, in a long while. In 2019, Google dissolved an AI ethics board only a week after starting it, and in 2020 blocked publication of an internal research paper highlighting the risks of large language models. This January the company split up yet another internal AI ethics team, following Meta's lead last fall and Twitter's the year before. And numerous big tech companies cut DEI programs last year, after 2020's myriad promises.
The track record, to say the least, isn't good.
Years ago, I attended a conference in Berlin on technology, society, and ethics. A young German woman in a workshop there – a philosophy master's student, I believe – proposed that big tech companies should hire… philosophers. These employees could, she argued, examine the external harms products might create and steer them in more ethical directions.
I thought her idea was great – in theory.
But I'd seen it fail in practice so often I couldn't keep from objecting. I explained that I worked at Google as a user researcher on a number of "social good" projects – perhaps the closest role I can imagine to the philosopher one she proposed. My primary job was to nudge teams toward developing products in more "user-centric" directions. And while my recommendations weren't always in service of greater long-run good for society, they were often directionally right.
Often these recommendations were adopted. But sometimes they were not – especially when they clashed with "business interests," those twin drivers of growth and profit. It's one thing to have philosophers in a company, and quite another to empower them to override the incentives driving nearly every large corporation – and nearly every team, performance metric, and promotion decision within it. Incentives that, at least for large public companies like Google and Meta, are in fact legally mandated.
And the German student seemed to understand. She quickly saw that real social change won't come from within big American companies, because businesses are always gonna business.
So why can't we in America understand the same? Why, especially, can't our media and government?
It's not that I believe large corporations are incapable of doing any good for society, nor that they should be given free rein to do whatever they'd like.
It's that businesses don't exist for the sake of social good: they exist to create jobs and to turn a profit – especially, again, if the company is publicly held.
With the exception of a few relatively small B Corps / Public Benefit Corporations like REI, Patagonia, and Ben & Jerry's, social good should be the work of nonprofits and the government. And if those aren't doing a good job of it, then we should do something about it.
As Scott Galloway recently said:
"All of this corporate social responsibility… has done nothing but distract the populace from the need to regulate these companies. And all of this notion that we keep hoping that capitalism can serve as an engine of change and for social good – we've had forty years to do that….
The majority of companies will do what they need to make more money, full stop. Ford would still be pouring mercury into the river if we allowed them to do that. The trust and safety team of OpenAI should be in Washington, DC and have the ability to tax, prosecute, and imprison."
It seems we've somehow forgotten this. (Though Lina Khan and the Biden administration do seem determined to make up for lost time 💪.)
In a way, those of us in the corporate world should see this as freeing. During my last few years at Google, friends and I would sometimes fret, wondering if we were just part of the problem.
And I remember my grad school roommate years ago arguing with her boyfriend about whether it was ethical to join an oil company. They were both earning geology PhDs, and oil exploration jobs are much more lucrative than academic ones. My roommate thought it wrong to help a company extract more oil from the earth, but her boyfriend thought a right-minded individual in an otherwise corrupt company might help steer it in a better direction.
He wanted to be the oil company's philosopher – just as the German student proposed.
Wouldn't it be nice, though, if we didn't have to think about this? Imagine going to work knowing we'll make money, and help a company make money – and trusting that government regulation and taxation will keep our work on the right side of history.
Wouldn't it be nice for companies to know they don't have to "out-ethics" their competitors? That they can focus on doing what businesses do well – creating goods, services, jobs, and money – within the bounds of regulation binding not only them, but also the competition?
It's for this very reason we have strict rules in football and hockey, basketball and boxing: certain hits are okay, while others are not. Within those bounds, go nuts.
How might American football be played without referees? Most fans would of course prefer the teams avoid extreme violence. The teams would likely police themselves, setting and following some kind of voluntary ethical guidelines.
But at the end of the day, fans want their teams to win. And if that requires breaking those self-enforced, nice-to-have rules once in a while?
Well, then.
Song of the Week: Billie Eilish – TV (Maybe I'm the Problem)