Facebook Oversight Board 1.0

Facebook created an Oversight Board to "help Facebook answer some of the most difficult questions around freedom of expression online: what to take down, what to leave up and why." The board has the authority to decide whether Facebook and Instagram should allow or remove content.

This is a commendable start. But it is only a tiny part of a much bigger set of more important questions that need to be answered.

There is a need for an independent oversight board to opine on these bigger questions. It is not realistic for such a board to have ultimate authority over what Facebook does, but it can at least be transparent in its views, following the commitment already made by the current Oversight Board:

"The board is committed to publicly sharing written statements about its decisions and rationale. Each decision will be published and archived on the board's website. Additionally, the board will release annual reports about its work."

10 questions for the Facebook Oversight Board 2.0

Here are ten suggested questions that such an oversight board might address:

1. At what point are we too powerful?

Is there a point at which we become too powerful for social or economic good in the nations where we operate? If so, what metrics could we use to measure this, and what actions could we then take to mitigate it?
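One candidate metric, offered purely as an illustration (choosing the right metric would itself be a question for the board), is the Herfindahl–Hirschman Index that competition regulators already use to measure market concentration. A minimal sketch, with made-up market shares:

```python
def hhi(market_shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in percent).

    Competition regulators commonly treat an HHI above roughly 2,500
    as indicating a highly concentrated market.
    """
    return sum(s ** 2 for s in market_shares_pct)

# Hypothetical share-of-market figures for a social-media segment.
shares = {"Us": 70.0, "Rival A": 20.0, "Rival B": 10.0}

score = hhi(shares.values())
print(f"HHI = {score:.0f}")  # 5400 for these hypothetical shares
print("Highly concentrated" if score > 2500 else "Not highly concentrated")
```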

2. What decisions should we make vs others making them for us?

With our first Oversight Board we’ve given an external, independent body power over the decisions we make. How far should this be extended to other areas? For example, is it appropriate that we alone get to decide whether, and when, we carry political advertising?

3. How do we balance ‘engagement’ vs ‘addiction’ in our products?

We employ neuroscientists, among others, to help us make our experiences more compelling. But at what point does this become manipulation, or a drive towards addiction? How can we monitor and measure this balance, and what steps can we take to correct anything that risks being psychologically damaging?
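As a purely illustrative sketch of what "monitor and measure" might mean in practice, a screen could flag accounts whose usage pattern looks compulsive rather than engaged. The thresholds below are invented for illustration; real ones would need clinical input:

```python
from statistics import mean

# Hypothetical thresholds; choosing real ones would itself need clinical input.
DAILY_MINUTES_FLAG = 240      # sustained heavy use, in minutes per day
LATE_NIGHT_SHARE_FLAG = 0.3   # share of use between midnight and 5am

def flag_possible_overuse(daily_minutes, late_night_minutes):
    """Return True if a week of usage looks compulsive rather than engaged.

    daily_minutes / late_night_minutes: per-day totals for one week.
    """
    avg_total = mean(daily_minutes)
    late_share = sum(late_night_minutes) / max(sum(daily_minutes), 1)
    return avg_total > DAILY_MINUTES_FLAG or late_share > LATE_NIGHT_SHARE_FLAG

# Example: a heavy, largely nocturnal week of use is flagged.
print(flag_possible_overuse(
    daily_minutes=[300, 280, 310, 290, 305, 320, 295],
    late_night_minutes=[120, 100, 130, 110, 115, 140, 105],
))  # True
```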

4. What is the right trade-off between encryption and accountability?

If we use end-to-end encryption, we might better protect free speech, but it also means we cannot protect users in the same way, or comply with laws that require the accountability and record-keeping law-enforcement authorities need access to. What is the right balance here? Can we make recommendations on this to help governments and legislators tread the right path?

5. What great responsibilities should come with our great power?

Given our great power, are there things we should take more explicit responsibility for? If so, what could we focus on to improve social good? How far can we help solve the problem of fake news and misinformation, for example? Or could we fund and publish research into how technology can make people happier and more fulfilled, rather than more anxious and lonely?

6. How far can we screen for under-age use of our products?

We are not alone among tech companies in having under-age users, but how much more should we do to try to prevent this? Is there a better way to protect minors? What more might we do when we discover under-age use?

7. What is the right balance between automation and human involvement?

Technologies like machine learning and artificial intelligence are very powerful for automation, including decision-making. But what is our framework for deciding when this goes too far? At what point do we require human agency, intentionality, and conscious control rather than automation? Can we describe and codify this so that we can also monitor it and act upon it?
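As a sketch of what "describe and codify" could look like, the following hypothetical policy routes an automated decision to a human reviewer whenever the stakes are high or the model is uncertain. The action names and thresholds are assumptions for illustration, not anything Facebook actually uses:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # e.g. "remove_post", "suspend_account"
    confidence: float    # model confidence in [0, 1]
    impact: str          # "low", "medium", "high"

# A codified, auditable automation boundary: when must a human decide?
CONFIDENCE_FLOOR = 0.9
HIGH_IMPACT_ACTIONS = {"suspend_account", "remove_page"}

def needs_human_review(d: Decision) -> bool:
    """Written down like this, the boundary can be monitored and audited."""
    if d.impact == "high" or d.action in HIGH_IMPACT_ACTIONS:
        return True                          # high stakes: always a human
    return d.confidence < CONFIDENCE_FLOOR   # low certainty: a human

print(needs_human_review(Decision("remove_post", 0.97, "low")))         # False
print(needs_human_review(Decision("suspend_account", 0.99, "medium")))  # True
```

The point of the sketch is not the particular numbers but that, once the boundary is written down as rules, it becomes something an oversight board can inspect, measure, and challenge.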

8. What is the right level of targeting and personalisation?

We can hyper-target users based on everything we know about them. But at what point does this risk trapping an individual in an ‘echo chamber’ or ‘filter bubble’ that is damaging to them or to society? Do we have a responsibility to encourage diversity of views, serendipity of experience, and collectivism rather than individualism? If so, can we develop our technology and algorithms in this direction?
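One illustrative direction, sketched below with toy data (this is not how any real feed-ranking system works), is to reserve a fixed share of feed slots for content from outside the user’s inferred interests:

```python
import random

def diversified_feed(personalised, outside_bubble, diversity_share=0.2, seed=None):
    """Mix a personalised ranking with out-of-bubble items.

    diversity_share: fraction of feed slots reserved for content the
    targeting model would not normally show this user.
    """
    rng = random.Random(seed)
    n_outside = max(1, int(len(personalised) * diversity_share))
    picks = rng.sample(outside_bubble, min(n_outside, len(outside_bubble)))
    feed = personalised[: len(personalised) - len(picks)] + picks
    rng.shuffle(feed)
    return feed

# Toy example: one of five feed slots goes to out-of-bubble content.
print(diversified_feed(
    personalised=["p1", "p2", "p3", "p4", "p5"],
    outside_bubble=["o1", "o2", "o3"],
    seed=42,
))
```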

9. How hard should we try to win back lost users?

Users of our products may delete them, unsubscribe, or simply stop using them. What do we believe is an acceptable level of effort to win them back? What is the right balance between our commercial interests and a user who may be trying to reduce their dependence or time spent online?

10. Can we commit to making our opinions public and transparent?

These are hard questions and there are no obvious ‘right’ answers. They apply to many other companies, not just Facebook. But they are important for the future of our society. Are we brave enough to make our independent views, opinions and recommendations public so that democratic debate and public scrutiny can help refine and guide our decisions?