Facebook has never been under more scrutiny, especially after this week’s Christchurch atrocity. Speaking at the Green Building Council of Australia’s Transform event, a former chief with the global giant stepped through the unintended consequences of the company’s rapid growth and the subsequent lessons for sustainable digital transformation.
Stephen Scheeler came to Australia in 2013 to run Facebook’s Australia and New Zealand business when the company was first listed on the stock market.
He now goes by the title “the Digital CEO”: an advisor, mentor and speaker dedicated to driving the uptake of digital tools in Australian businesses. He says the region is falling behind, putting the economy, businesses and “the future of Australia for our children and grandchildren” at risk.
But as a former employee of Facebook, Scheeler approaches his role with caution.
“What this all boils down to is an ethical question. How in the future should we treat the individual’s privacy and their digital identity?”
Displaying a photo to prove he worked with Mark Zuckerberg, he says the founder of Facebook is “a genius, not perfect, but smarter than pretty much anyone you’ve ever met. And other geniuses have said that… they’ve had that opinion.”
“I’m also of the opinion that he wants to do social good and change the world for the better. The things that are happening at the moment mean I think Mark [Zuckerberg] is not sleeping well and that he and the entirety of Facebook are trying to find a solution.”
Scheeler uses the tech company’s rapid growth and its various shortcomings to provide powerful context and lessons for businesses and institutions in the process of taking up digital tools.
The platform started off with the intention to connect people – a mission as valiant as “motherhood and apple pie” according to Scheeler.
Although he believes this ethos endures in some capacity, the company has morphed into much more than a platform for users to communicate with one another.
He says there were three key components to Facebook’s rise before the company started to “come apart at the seams”.
The first is that it’s a platform for authentic identity. Unlike Twitter, where users can stay anonymous if they choose, Facebook doesn’t accommodate fake or anonymous accounts and seeks to flush them out.
The implication of this, says Scheeler, is that the platform knows exactly who it’s talking to and when, which is extremely appealing to advertisers.
The next step for the tech company was creating personalisation at scale. Using machine learning and algorithms, the company set out to create news feeds where users see only the most “relevant” posts from others.
Finally, the company brought in data science to drive deliberate, data-fuelled growth based on the social imperative that people want to connect with one another.
By drawing out insights from the vast quantities of data becoming available, the company was able to drive user engagement and create the “modern Facebook we have today.”
The result, Scheeler says, is a “personal, daily newspaper for 1.3 billion humans.”
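Facebook’s actual ranking system is proprietary and far more complex, but the idea of “personalisation at scale” can be sketched in a few lines: score every candidate post against a user’s history and show the highest-scoring ones first. The weights, features and posts below are invented purely for illustration.

```python
# A toy sketch of relevance-ranked feed ordering. The features
# (friend_affinity, likes) and the weights are hypothetical, not
# Facebook's real signals.

posts = [
    {"author": "alice", "topic": "sport",    "likes": 120, "friend_affinity": 0.9},
    {"author": "bob",   "topic": "politics", "likes": 340, "friend_affinity": 0.2},
    {"author": "carol", "topic": "sport",    "likes": 15,  "friend_affinity": 0.7},
]

def relevance(post):
    # Weight how much the user interacts with the author more heavily
    # than the post's overall popularity (normalised to the max likes).
    return 0.7 * post["friend_affinity"] + 0.3 * (post["likes"] / 340)

feed = sorted(posts, key=relevance, reverse=True)
print([p["author"] for p in feed])  # most "relevant" authors first
```

Even this toy version hints at the dynamic Scheeler describes next: because affinity dominates, the user keeps seeing the same familiar voices at the top of the feed.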
So what could go wrong? What’s a filter bubble?
First, there’s the problem of “filter bubbles” where users are stuck consuming the same ideas and information, which Scheeler says is “great for advertisers but not good for public discourse”.
Why did it go so wrong?
Scheeler believes Facebook came unstuck for a number of reasons.
The company grew too fast for a start, with small teams of inexperienced developers building products and releasing them to millions of people without considering the potential risks and unintended consequences.
These teams also tended to operate on the assumption that people only use the tools for good, Scheeler adds.
Essentially, he says the problem was “too much data, not enough ethics.”
How your innocent photo can end up in an unknown narrative
On a macro level, Scheeler says digital disruption devoid of ethics can lead to the erosion of faith in institutions.
There’s also the danger that people’s “nano-bits” of data are being aggregated into “actionable data sets”.
This is essentially when benign data – like a photo of someone walking down the street – becomes knitted together with other data to be used in various unknown applications.
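The “knitting together” Scheeler describes is, mechanically, a series of joins on shared identifiers. A minimal sketch, with all records, keys and names invented for illustration:

```python
# Three separately benign data fragments. None identifies much on its
# own; joined on shared keys, they become an "actionable" profile.

photo_metadata = {"device_id": "D42", "place": "George St", "time": "08:55"}
app_checkin    = {"device_id": "D42", "user": "j.smith"}
loyalty_card   = {"user": "j.smith", "interests": ["running", "coffee"]}

# Merge the fragments: device_id links the photo to the check-in,
# and the user name links the check-in to the loyalty data.
profile = {**photo_metadata, **app_checkin, **loyalty_card}
print(profile)
```

The anonymous street photo is now tied to a named person, a routine time and place, and a set of commercial interests: exactly the aggregation of “nano-bits” into an actionable data set.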
We need trust and privacy – and transparency on our data
Scheeler hopes that in the future, everyone and everything will have a “data identity”. This means “you’ll be able to answer questions about who has your data and what they are doing with it.”
He also says people will have sovereign control over their data, and that data ownership will be rules-based, not unlike the rules for property ownership.
And data will also be monetisable, or not, depending on what people choose – “most people don’t know how their data is being monetised at the moment.”
He says one important tech-enabled development is a set of protocols that allow machine learning without centralised data storage. Called “federated learning”, these protocols let models be trained across organisations’ data without the raw records ever leaving their owner or an individual’s identity being revealed.
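The core idea of federated learning can be shown with a toy example: each party fits a model on its own data (here, just a mean) and shares only the resulting parameter and a count, never the records themselves. The organisations and values below are invented for illustration.

```python
# A minimal sketch of federated averaging with a toy model (a mean).
# Each organisation computes its parameter locally; only the parameter
# and sample count leave the organisation.

local_datasets = {
    "org_a": [2.0, 4.0, 6.0],
    "org_b": [10.0, 12.0],
}

def local_update(data):
    # Runs inside the organisation: raw data never leaves here.
    return sum(data) / len(data), len(data)

# The coordinator aggregates the shared parameters, weighted by each
# party's data size, without ever seeing the underlying records.
updates = [local_update(d) for d in local_datasets.values()]
total = sum(n for _, n in updates)
global_param = sum(p * n for p, n in updates) / total
print(global_param)  # 6.8 — the same as the mean over the pooled data
```

Real systems train neural networks this way (averaging gradient or weight updates rather than means) and add further protections such as secure aggregation, but the division of labour is the same: computation travels to the data, and only model updates travel back.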
He says the outcome will be that people will start moving to organisations that they trust and that are transparent about their activities. This is already starting to happen, he says, with Facebook in the bottom 10 of a list of 100 organisations ranked according to trust.