Learning, working and ?

A lot of my recent thoughts have revolved around issues of crowdsourcing and online education.

../worklearn.jpg

One of my co-authors (Markus Krause) is co-organising a workshop on that topic, WorkLearn 2014. It will take place in Pittsburgh on November 2-4, as part of the Human Computation conference HCOMP 2014. Human Computation is a term used more or less interchangeably with social machine, although it carries different connotations.

The stated motivation of the workshop is very ambitious:

The online education and crowdsourcing communities are addressing similar problems in educating, motivating and evaluating students and workers. The online learning community succeeds in increasing the supply side of the cognitively skilled labor market, and the crowdsourcing at scale community creates a larger marketplace for cognitively skilled work.

Linking online platforms for crowd work with platforms for MOOCs has the potential to: provide knowledge and training at a massive scale to contributors; collect data that identify expert skills; engage contributors in simultaneously working and learning in a social environment; and organize large communities around online courses on specific topics. These all provide new opportunities to support and deploy sophisticated algorithms for crowd learning and work.

The most successful example in this direction is of course Duolingo, which helps translate the web using the volunteer labor of language learners. If one omits the learning, the strategy there is not that different from the one used by my Coursera coworker Abraham Bernstein to translate books using Amazon Mechanical Turk, and indeed part of his effort aims at designing effective tools to program those social machines (with the programming language CrowdLang).

I have always had some qualms about the ethics of crowdsourcing, even though it can clearly be used for good: the prototypical success story is the work of Ushahidi during the 2010 Haiti earthquake. I was thus very happy to see over Labor Day 2014 that Michael Bernstein from Stanford announced guidelines for academic requesters on Amazon Mechanical Turk. He explains the rationale for this (a Turker is a worker on the Amazon Mechanical Turk platform):

An IRB-approved researcher experimented on the [crowdsourcing] platform unannounced. The result was Turker confusion, strife, and wasted time, in a system where time is what it takes to make ends meet.

These guidelines were themselves crowdsourced, designed together with the Turkers (it's only natural!).

At the same time, over the summer, there was a huge controversy over the iffy ethics of experimentation on social platforms. It was triggered by the publication, at the very end of June 2014, of a Facebook experiment on its users (don't miss the Cornell IRB flowchart there). There are a ton of links about this, but the best is probably the account by Mary L. Gray of an ethics panel that took place at the Microsoft Research Faculty Summit (and was unfortunately published with much delay).

In any case, this should give serious pause to any educator. One can see lots of fields suddenly getting much too close to one another, with very different or nonexistent values. Online learners, just like Turkers, are vulnerable. Martin Weller and George Siemens have recently insisted on this.

So, what do you think? Anyone want to submit a position paper (2 pages) on the topic? Would any of my co-learners in MOOCs like to see what we can do? We could, well... crowdsource it...

(of course, this was due yesterday: official deadline is "September")

Click here so I can tell you about privacy (and invade yours too)

Simply by looking at this page, you agree to this site's privacy policy, which is a copy of the Swiss Railway's statement on the use of Google Analytics:

“This website uses Google Analytics, a web analytics service provided by Google, Inc. (“Google”). Google Analytics uses “cookies”, which are text files placed on your computer, to help the website analyze how users use the site. The information generated by the cookie about your use of the website (including your IP address) will be transmitted to and stored by Google on servers in the United States . Google will use this information for the purpose of evaluating your use of the website, compiling reports on website activity for website operators and providing other services relating to website activity and internet usage. Google may also transfer this information to third parties where required to do so by law, or where such third parties process the information on Google's behalf. Google will not associate your IP address with any other data held by Google. You may refuse the use of cookies by selecting the appropriate settings on your browser, however please note that if you do this you may not be able to use the full functionality of this website. By using this website, you consent to the processing of data about you by Google in the manner and for the purposes set out above.”

Google Analytics Terms of Service, as on the SBB website

The site itself does not collect data. However, I do use Google Analytics, which requires me to introduce this policy as part of the service. The service allows me to track users on this website in various ways, and produces beautiful graphics like this one:

../google-analytics.jpg

The comments below are bound by the Disqus Terms of Service, which are available here.

If you don't like it, you can leave. Such is the law of the internet.

Edtech policies (part I)

In a previous post, I talked about some of the problems associated with data collection with no clear purpose. In this one, I want to compare two of the big players in edtech on a narrow point, that of comment posts and their associated privacy. This is partly done in response to Bill Fitzgerald's posts on edtech privacy policies, but also as a concerned parent looking a bit far into the future.

On June 26th Google held an I/O developer conference. Like many, I was hoping for some kind of announcement about mooc.org. Not much was said on that, but it was still an instructive look at Google's efforts in the MOOC space. I was particularly struck by a comment of Julia Wilkowski (a leader of their MOOC project) following a question from the audience. The comment is at 28:35, if the video does not play for you:

She informs us that no sophisticated analysis has been performed thus far on the Google MOOC forum posts. There are apparently two reasons: it is complicated, and they are required to delete all the posts after 60 days to comply with their privacy policy. Indeed, Google MOOCs fall under the global umbrella of the Google Privacy Policy, which seems to uniformly apply to all their products. The main reason for this number seems to be technical rather than anything else (the backup system is presumably very complex, extending all the way to physical tapes), since I couldn't find a reference to it in the Google Privacy Policy (and neither could other people, if you, well, google it).

A tad later, Peter Norvig talks about classifiers (similar to those used in the recent Facebook experiment that would make the news three or four days later), for instance to help determine when a student might be confused, a classic trick in intelligent tutoring systems; a toy sketch of such a classifier is given below. He immediately reminds us though:

But there still are a lot of privacy issues involved in what [..] information can you keep, how much can you tie the identity in the forum to the identity of the student, can you tie that to their identity someplace else, and the field as a whole has to come to grips with the privacy issues so we can share and learn what we want without violating privacy.

—Peter Norvig
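
None of the platforms have published the models Norvig alludes to, so purely as an illustration of the kind of classifier involved, here is a minimal sketch in Python using scikit-learn. The example posts, labels and model choice are all my own assumptions, not anything Google or Coursera actually uses.

# Toy sketch of a "confused student" detector: a bag-of-words logistic regression.
# The posts and labels below are invented; a real system would train on thousands
# of hand-labelled forum posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I don't understand question 3 at all, totally lost",
    "Can someone explain why my loop never terminates?",
    "Great lecture this week, the examples were very clear",
    "Here is my summary of the proof for anyone interested",
]
confused = [1, 1, 0, 0]  # 1 = the post sounds confused (hand-labelled)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, confused)

new_post = "I am stuck on the second assignment and have no idea where to start"
print(model.predict_proba([new_post])[0][1])  # estimated probability of confusion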

Coursera is another big player in the market, with a rather different approach. In their Terms of Service and Privacy policy, one can find the following:

../forum-reuse.jpg

The most remarkable sentence here is: "We also reserve the right to reuse Forum posts containing Personally Identifiable Information in future versions of the course we offer, and to enhance further course offerings." This sentence is very puzzling to me. What does it mean? The only logical explanation I can offer is that Coursera plans to repopulate forums in later iterations of the course with posts from a previous run, presumably one where there was more emphasis on moderating the course. Ask a dumb question once, and it will be asked again on repeat, in your name, in contexts you don't necessarily know. Share a bit too much of your great idea for a startup, in a context where you feel comfortable, well, too bad: it might be reshared again, even if you delete your post at the end of a course (many MOOC students are under the illusion that this protects their intellectual property).

The Terms of Service also include the following, which is classically present in many platform disclaimers:

../forum-disclaimer.jpg

The Coursera Terms of Service include numerous such disturbing clauses, as detailed in this wonderful post from August 2012. That post is a very highly recommended read, since it details issues of free speech and academic freedom, a hot-button topic these days. It also includes the following gem:

It is crucial that we pay close attention to the fine print, something unfortunately overshadowed by the immediacy and novelty of Web 2.0 solutions and the latest trends in brand management techniques.

I can only recommend this approach, but it has to be tempered by a quote given earlier in the piece:

Because Coursera mediates between instructor/university and user/student communication, we are dealing with at least four major relationships: user-Coursera, Coursera-instructor/university, user-Coursera-instructor/university, and vice versa. I am mainly focusing on the user-Coursera relation (terms of use and privacy policy), but it should be noted that these are really only separable at the analytical level. In reality, all of these relations are in play at any given time.

This leaves many unanswered questions, which are not easy to address: doing so requires access to other contracts, between the instructor, the university and Coursera itself.

Only one such contract between a university and Coursera has been discussed on a wide scale, the contract signed with the University of Michigan. It was the object of a 2012 Chronicle of Higher Education article (an antiquity in the domain of MOOCs), based on a Freedom of Information request for the University of Michigan contracts. More recently, the UCSC Faculty Union has entered negotiations with Coursera, which are detailed on its blog. From the outside, these negotiations seem very one-sided and highlight differences with the University of Michigan contract:

In the Michigan contract, the instructor grants to COURSERA various rights FOR THE DURATION SUCH CONTENT IS OFFERED THROUGH THE PLATFORM (i.e., very limited transfer of rights). In our contract, in contrast, the rights are granted TO THE UNIVERSITY and this appears to be irrevocable and not connected to the hosting of the course on the Coursera platform.

In other words, the balance of the Coursera contract shifts towards the instructor at the University of Michigan, compared to UCSC. In the latest Shanghai rankings (for the little they are worth), Michigan was ranked 22nd while UCSC was 93rd. Bear in mind that Michigan joined earlier, which might also affect this complex bargaining equation.

A few other contracts have been put online, intentionally or not, and can be found by googling titles etc. I found eight in total, which can serve as evidence of subtle shifts in Coursera's strategy, and also of segmentation according to the characteristics of the universities involved (European vs. American, public vs. private, etc.). Since many of those contracts are locked under Non-Disclosure Agreements, and this has already become a union issue elsewhere, I can only encourage other academics to push for openness of those contracts at their own institutions.

(For a cool application of machine learning to the process of teaching, look also at the video around the 4:30 mark. Google seems to focus there on content rather than users, and to extract value from the student contributions rather than from their private data. They effectively intend to crowdsource smarter compilers.)

(I want to thank Ignacio Despujol Zabala for letting me know about this Google I/O session.)

How fast the world has changed

As I am starting to blog, I am taking a course on openness with P2PU. This is helping me think about how to quickly and most efficiently reclaim my domain. For this, I have been looking back at the few notes I have posted on Facebook over the years. Actually, you can too if you have a Facebook account, as I have just made them viewable to every Facebook user. This way, if you are new here, you can get a quick sense of what I am about. Incidentally, due to a little-known item in the Facebook privacy policy, this means that all the comments are now public as well (i.e. my friends gave the rights to their comments to Facebook, I gave the rights to my original posts to Facebook, and Facebook decided to tie the privacy of comments to the privacy settings of the original post). I suppose every platform owner has to make lots of those decisions, which eventually shape the service.

In any case, one of the notes struck me. It was just a link, actually, to a 2008 New York Times story about an unfortunate Walmart employee who was trampled to death on Black Friday. The angle is that bad things happen in the world, and sometimes this can end up on the internet, filmed on crappy cell phone cameras. And then we have to tell kids about all that violence. The undertone of the piece is that we were just starting to grapple with that problem back then. A friend of mine commented and asked for my opinion. The original note is here, with a contemporary screenshot below.

../walmart.jpg

Close to six years later, we are still facing the same problems, of course. The internet is more violent and more invasive than ever. Any platform owner knows the value of good filters to curate content for their users. This content curation can be done jointly by machines and humans, leading to risks of algorithmic bias that are still poorly understood (Facebook, Twitter). It can also be done exclusively by humans, operating under strict rules, for both posts and comments. This is the model applied by MetaFilter, leading to high-quality output but a relatively weak business model, unfortunately still vulnerable to algorithmic whims.

So you're saying that people tried to use the economies of scale of the internet to disrupt the conventional and somewhat hidebound traditional methods and then it turns out that certain things requiring human eyeballs and judgment do not actually scale along with this stuff and the lesson is that you need to keep people in the mix in not just token ways even if this interferes with your bottom line...? I know that song!
-- Metafilter user and former moderator jessamyn

In any case, six years after this Facebook note, the world keeps turning. People get married, have babies, raise their children. And most people still use Facebook.

"Don't be evil", or how I learned to behave like a startup and love the data

../strangelove.png

When Gmail launched in 2004, I received invitations early. If I remember correctly, they came from a friend working at Google who had already snatched a few fun login names. I did the same, and passed on further invitations to my brother and our friends back home.

A year or so later, when my brother was visiting with his friends, we went on a tour of the Googleplex. Randomly passing in front of the cubicle of a namesake, one of the friends suddenly realised why he had not been able to register his own name earlier. In other words, a collision unknown in the physical world had first manifested itself digitally.

I like to think of those collisions as the digital equivalent of New York overcrowding, trying to fit too many people into just a few login characters.

So which fun pseudonyms did we choose? Which did we consider worthy in this land grab? Certainly many of them were aimed at our shared cultural background as Belgians in Silicon Valley. If you had tintin@gmail.com, or frietkot@gmail.com, that would be pretty impressive, no? Indeed, we grabbed names of regions, superheroes, movie stars, concepts, etc. We certainly thought this was OK, and didn't reflect more on something that became controversial only later.

One of the logins I grabbed had the name of a Belgian politician, let's call him Some Guy. He was on TV and I thought my friends would get a chuckle if I emailed them from that address. Certainly, I might have crossed a moral line already then, but it felt like a very tiny escalation in this virtual land grab.

What did I do with this account? I mostly used it for spam protection. I set it up so that all emails sent there would be forwarded to my default inbox, and gave this address whenever there was a need to register for a spammy online service. This worked well, possibly because Gmail's algorithms had learned to weigh emails transiting through this address differently and benefited from the additional segmenting.

Around 2008, inevitably, I started receiving emails addressed to That Guy. Those collisions happen to all of us, for all of our email accounts. What is the moral thing to do there? My philosophy is most of the time to let it drop, but sometimes also to reply to the sender telling them that they got the wrong address (due to emails missent to my main account, I must have had to contact a dozen hotels in Quebec by now). In most cases, the only way to know what to do is to read the email, slightly invading this other person's privacy.

Just like Rachel and her Friends in their New York apartment, we struggle to deal with those privacy collisions, especially when we feel a need to intervene.

For That Guy, it was even easier to feel morally OK about it: I never actively sought the emails, had no way to prevent the mistake, and anyway the emails were from cranks. On top of that, by that time I had registered with too many services under that pseudonym, which effectively tied my identity to it, with no way to reverse the situation. So in effect this data collection was happening, whether I liked it or not, or at least that was my moral justification.

The problem with data is that it leaks. The cranks don't just email one influential person at a time. They email a few, who are likely to know each other. As a consequence, in this case, the cranks polluted those recipients' email software with a wrong email address. Of course, in due time, the email autocompletion software of those recipients started tripping them up, and I started receiving emails from other politicians to That Guy. Algorithmic curation had gone wrong, and actively misled humans. The fact that these were politicians might have misled me: I should have made the effort of explaining the awkward situation to That Guy's interlocutors and tried to correct it. But I didn't. Somehow a couple more emails made it to me that were clearly of a more social nature. Again, I didn't do anything. This data will not disappear unless actively deleted, and even then I can only be so sure.

At this point you will deservedly think that I am a moron. But was it morally wrong? And when exactly did it go wrong?

Throughout, my moral justification was that I was not actively seeking this. Emails would land in my mailbox and I would have to read them to know what to do. Of course, this conveniently ignores what I could have done to prevent those emails from arriving in the first place. Part of my justification was that I wasn't doing anything with the data collected. There was no clear goal, except an awareness that this could be used to make a point later, which I guess I am now making publicly here (in fact, I have used this to make the same point in private over the years).

The more interesting issue here is to understand that this is exactly how many big data companies function. "Don't be evil" Google gobbles data all over the place for purposes that are not always clear at the time, and the justification is often that this was incidental, automated and did not require human intervention. Looking at a corporate setting raises the stakes, and my feeble moral justifications are not sufficient anymore. It becomes a matter of ethics, whose default position should arguably be that data collection is unethical: data should not be kept beyond the time necessary for its intended use, with that use itself subject to precise and established ethical rules. It looks like Google has understood this in some markets, for instance education (unlike other players there), and this will be the topic of a later post.

(Image in the public domain: the Dr Strangelove War Room, which happens to be replicated in the Airbnb HQ)

(Social) teaching machines

I want to take the opportunity of the recent talk by Audrey Watters on Ed-Tech's Monsters to share some of my thoughts.

In her talk, Watters physically situates herself at Bletchley Park, a place of invention, ingenuity and deceit that greatly contributed to the Allies' war effort and incidentally to the evolution of computing. She then steps back in time to Ludd and his followers, who rebelled against the introduction of machinery in their work. She then segues into the Frankenstein story of a creation abandoned by its master, and finally draws parallels with the situation in ed tech today and the "promises" of teaching machines. This was an impossibly bad and short summary of a very good talk, so I would highly recommend that any reader lost here go read the original. After doing that, please come back.

During World War II, cryptographers worked at Bletchley Park to decipher German and Japanese secret messages. These were encoded by various versions of a machine called Enigma, a typewriter wired with multiple electrical contacts that constantly shuffled letters around. It is really quite a dumb but very messy and obfuscated process, with one useful property: at any stage it implements an involution, so one can use the same machine to both encrypt and decrypt. You can see little Millie demonstrating this in the video above (shot in September 2013 by yours truly).
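
To see why this matters, here is a minimal sketch in Python of a reciprocal substitution (a toy construction of mine, far simpler than an actual Enigma): because every letter is swapped with a partner, the map is an involution, and the very same function encrypts and decrypts.

# Toy reciprocal substitution (not a faithful Enigma): split the alphabet into
# thirteen pairs and swap each letter with its partner. Such a map is an
# involution, so one function serves for both encryption and decryption.
pairs = [("A", "N"), ("B", "O"), ("C", "P"), ("D", "Q"), ("E", "R"), ("F", "S"),
         ("G", "T"), ("H", "U"), ("I", "V"), ("J", "W"), ("K", "X"), ("L", "Y"),
         ("M", "Z")]
swap = {}
for a, b in pairs:
    swap[a], swap[b] = b, a

def crypt(text):
    """Encrypt or decrypt: the same operation, thanks to the involution."""
    return "".join(swap.get(c, c) for c in text.upper())

message = "ATTACK AT DAWN"
cipher = crypt(message)
assert crypt(cipher) == message  # running the machine twice recovers the plaintext
print(cipher)  # NGGNPX NG QNJA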

../bombe-front.jpg

To attack the ciphered messages, the cryptologists at Bletchley Park did not build full-on computers, but instead machines that could simulate many Enigmas in parallel (36 Enigmas per Turing Bombe). These also had extra wiring which encoded additional properties of the Enigma protocol. On any given day, around 200 of these machines were used to recover the common settings for all the encrypted messages sent that day.

What has always fascinated me with Bletchley Park is the subtle interplay between humans and the machines. While the heavy computations were done by those Bombes, the British did not seek (or manage) to automate everything. Some steps were always left to manual labor, most notably message passing and picking the initial input. The initial input was known as a crib, and is essentially an informed guess at partial plaintext. It seems to have always been more of an art than a science to obtain, even involving some psychology to know where to look. By message passing, I mean that Bletchley Park was not just a bunch of machines: these were disconnected, so there actually had to be many people transcribing output from one machine, making relatively simple decisions (all lights lit!) and entering that output into the next machine. There were several good reasons to do it this way. It is easier to train staff than to build a new and complex sorting machine. On top of that, war is messy, and the Germans changed their procedures several times, requiring agility in the workflow (the Germans were less likely to change their hardware). Over the course of the war, there was constant prototyping of different workflows around a core mechanical infrastructure, and this experience helped abstract the generic modern computer (formally, a Turing Machine) and eventually build it (unlike Charles Babbage's machine and Ada Lovelace's programs, which remained theoretical).
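
The actual Bombe logic was far more intricate, but the idea of a crib-driven search over settings can be illustrated with a much cruder sketch in Python (a simple shift cipher stands in for Enigma here; everything about it is invented for illustration): try every candidate setting and keep only those whose decryption contains the guessed fragment of plaintext.

# Toy illustration of a crib-driven search: a shift cipher stands in for Enigma,
# and the "setting" is just the shift. We test every setting and keep those for
# which the crib (a guessed piece of plaintext) appears in the decryption.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def toy_encrypt(plaintext, setting):
    return "".join(ALPHABET[(ALPHABET.index(c) + setting) % 26] for c in plaintext)

def toy_decrypt(ciphertext, setting):
    return toy_encrypt(ciphertext, -setting)

intercept = toy_encrypt("WEATHERREPORTCLEARSKIES", 17)  # we pretend not to know 17
crib = "WEATHER"  # an informed guess at part of the plaintext

candidates = [s for s in range(26) if crib in toy_decrypt(intercept, s)]
print(candidates)  # the surviving settings; almost certainly just the true one, 17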

The lesson of Bletchley Park is that it is sometimes easier, and sufficient, to build a social machine rather than a fully automated one.

A social machine is an environment comprising humans and technology interacting and producing outputs or actions which would not be possible without both parties present. The term became popular thanks to Tim Berners-Lee, who anticipated such machines along with the World Wide Web.

../bombe-back.jpg

Facebook, for instance, serves as a social machine in multiple ways. You can pass on content to your friends, which they can "like" and comment on, they can tag you in pictures, etc. Every such action is logged, and this machine has a single goal: to know you better and serve you more valuable ads. All the necessary information has to be volunteered by humans, because machines would not have been able to guess it on their own. Sometimes this social machine actively uses your friends to disclose information you might have wanted to keep private, even if you are not a Facebook user! Kids ratting on their parents, in other words.

How is this relevant to ed tech? One of the most successful social machines in education is Duolingo. While it offers students the option of learning a language, it is really built with the intention of translating the web, and uses a creative setup to find and motivate participants. Even the course creation is now crowdsourced, via its incubator. MOOCs actually tend to rely heavily on the same type of crowdsourcing. Course creation is crowdsourced to professors, who can create custom social machines tailored to the topic at hand. When the course is run, this machine collects information about who is a good student, who is bad, who is a deep thinker, who is meticulous, who defines their own path, etc. Eventually, the goal might be to evaluate all these characteristics at scale algorithmically (despite all the risks of algorithmic bias that this entails), but the key point is that it can be "faked" at first: via peer feedback and rubric grading, one can use power relationships to inject fairly complicated judgements into this machine, at scale, with little cost.
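
As a purely illustrative sketch in Python (with an invented rubric and invented scores, not any platform's actual scheme), here is how rubric-based peer grading can turn a handful of human judgements into a number the platform can store and compare at scale; taking the median per rubric item keeps the occasional careless grader from dominating.

# Toy sketch of rubric-based peer grading aggregation. Each peer scores a
# submission on a few rubric items out of 5; the platform takes the median per
# item, which is robust to an occasional careless or adversarial grader.
from statistics import median

RUBRIC = ["thesis clarity", "use of evidence", "originality"]

peer_scores = [  # one dict per grader, for a single submission
    {"thesis clarity": 4, "use of evidence": 3, "originality": 5},
    {"thesis clarity": 4, "use of evidence": 4, "originality": 4},
    {"thesis clarity": 1, "use of evidence": 1, "originality": 1},  # careless grader
]

aggregated = {item: median(g[item] for g in peer_scores) for item in RUBRIC}
overall = sum(aggregated.values()) / len(RUBRIC)

print(aggregated)          # {'thesis clarity': 4, 'use of evidence': 3, 'originality': 4}
print(round(overall, 2))   # 3.67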

Similarly, other relatively complex MOOC services are also sometimes crowdsourced, such as translating course materials, or mutual technical support for the professors and students. One can expect that some of these tasks will eventually also be automated: for instance, some MOOC platforms already use intelligent agents (robots masquerading as humans) to answer student questions in the forums.

MOOC platforms offer professors the option to easily stand up their own social machines. What should their purpose be? Who should be responsible for them?

Watters insists that Luddites were not rejecting technology, but rather rejecting exploitation. Crowdsourcing already carries significant risks of exploitation, particularly in the domain of intellectual property, but this is not the only one. In another talk, she says that "Student data is the new oil". Indeed, this seems to be another path that all the big MOOC providers have chosen so far. A professor building a MOOC only helps the platform collect more private information about its users, maybe even under the guise of improving their user experience. But for what purpose exactly? Which engine is running off this oil? Where is it headed? Is this data helping research in education? In social science? In human-computer interaction? Or is it simply for profit, selling that data to the highest bidder/best revenue model, without moral guidance? All these options are actively pursued right now, sometimes simultaneously, and professors preparing a MOOC should pause over these issues and think carefully about the setting in which they have decided to do so. Possibly they might have to fight for the luxury of picking this setting. Professors have a lot of moral responsibility towards the students (the weakest cogs by far in this social machine), to make sure that the free-education-for-all mantra does not turn into another form of exploitation. Do these professors even fully understand the situation? Do they fully understand how free-is-a-lie? Who carries the responsibility of informing them?

Research conducted without applied ethics is morally bankrupt because when scientists lack morals, outside sources can more easily manipulate their work for destructive purposes. In such situations, scientists are likely to adopt the rationalizations of that party to justify their efforts.

—Erica Cook

In the Bletchley Park analogy, this moral responsibility is eclipsed by the dramatic circumstances of war. One person encodes, the other one decodes, some people die, some survive, but at least it feels fair (to me at least, maybe blinded by my mathematician's background). Yet the parallels outlined above remain just as strong with Los Alamos and the atomic bomb, where Feynman was reorganising his own social machine to perform simulations of atomic explosions, even holding competitions pitting his chimeric machine against actual IBM computers. Certainly, the ethical questions are more nagging with Los Alamos, but on either side of the Atlantic the machine operators never had the opportunity to raise concerns about what they were contributing to. In an environment full of (male) generals and (male) scientists, the machines were mostly "manned" by women, within a society that didn't even pretend to give them an equal voice. After the war, many ethical questions hung squarely and solely on the scientists' shoulders.

So if we don't like the current MOOC models, what should be the way forward?

Luddites were seeking to disrupt the technological disruption, and we as professors should seek to do the same.

In fact, one might argue this is part of our job: to help society move forward without fear of challenge, criticism or controversy, as long as we can back our arguments with evidence. In today's conversation, business logic has misappropriated the words "disruption" and "innovation" and mostly tied them to technology. In fact, disruption is to be found everywhere in academia, perhaps even more in the humanities: "why?" is a more powerful question than "how?". Certainly some MOOC providers have opened themselves up to this disruption, through overinflated claims of fixing with old technology something that was not necessarily broken. MOOCs are a fantastic opportunity to build truly new ways of learning, collaborating, discovering and generally helping society progress through the exchange of information.

Getting these improved MOOCs off the ground will be hard. It will require dedication, transparency, freedom to tinker, accountability and tolerance for failure. Above all, it will require rock-solid ethical ground, which is too easy to compromise in a competitive environment mixing academia and its "strategic relationships".

Connected Course: Introduction

This is an introductory post for the connected course taking place at connectedcourses.net.

My name is Paul-Olivier Dehaye; I am a research mathematician in Zurich. I have taught at university level for close to 15 years, and two years ago I started investing time in online education. Last year I taught a Programming in Python SPOC and Teaching goes massive: New skills required on Coursera, and supervised several edtech projects with students. I am rerunning the Python SPOC in Fall 2014.

This course seems to match my interests very closely, and I am looking forward to relating the themes explored in the course to my own experiences in higher education. I am hoping to learn a lot, contribute positively to the discussion, and maybe get some peer feedback. Of particular resonance to me are issues of silos, faculty autonomy and independence, vulnerability (of students and instructor), censorship, data privacy, ethics and co-discovery (in addition to co-learning).

I am also interested in the community of this course itself: are there marked differences between STEM and other fields? How can we make STEM higher ed professors at research institutions understand the importance of those issues? How can we communicate that bad decisions now could significantly affect the future of their profession?

Due to personal reasons, I am not entirely sure of how sustained my effort in the course will be, but will do my best!

I am participating in this course as an individual, a student seeking to learn more, and not as a representative of my employer. Undoubtedly though, what I learn here I will seek to reapply in my professional life. Should you have any questions about my motivation in participating in this course, please ask me below or in private. Thanks!

Keeping a Soul in the Driver's Seat

../railway.jpg

I can't wait for driverless cars. Ten years is the estimate. Combined with car sharing, they will revolutionise our cities and make them much more efficient and livable. They should bring unmitigated good to our society. Yet they still come with ethical challenges, usually categorized as trolley problems.

Wired just published an article called Here's a Terrible Idea: Robot Cars With Adjustable Ethics Settings, outlining the ethical issues involved in substituting human drivers with robots.

In freak accidents, computers would have to make decisions such as killing one motorcyclist without a helmet vs. killing five pedestrians.

The writer raises many such dystopian choices: children vs elderly, us vs others, rich vs poor, etc. He rightfully sees a liability for anyone having to program those decisions. In his opinion, any attempt by the car manufacturer to distance itself from lawsuits by offering variable ethical settings to the owner of the car would not decrease the liability of the manufacturer, and therefore this remains an obstacle to rolling out driverless cars.

The car company has another option, which is missed by the writer: progressively absorb the insurance business.

First off, it's clear that any level of indirection and legal tangling is helpful in freak legal confrontations to shield the car manufacturer from legal responsibility towards private individuals. Secondly, the writer does not give enough credit to the creativity of engineers/lawyers/business types.

Why wouldn't they be able to introduce one further level of indirection? The manufacturer could build a "car without a soul".

The car could offer full access, through proprietary APIs, to its raw or slightly processed data, but require linking to an ethical core library before it would start. This ethical core would only be called upon when an imminent collision is detected, and asked to respond to the really tough questions (or it could be run in a loop validating any driving input). Who would take on the liability of writing such a core? Insurance companies would seem like the natural candidates. In fact, this is a very natural extension of their business: litigating over the choices they have coldly programmed in, rather than over the mistakes made by their clients. It would also make sense to decentralise this ethical core geographically, since driving customs are bound to vary from country to country (think of these comparatively safe Indian drivers or this Russian ninja).
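
To make the proposed split concrete, here is a minimal sketch in Python of what such an architecture could look like. All the names and the interface are my own invention, not any manufacturer's or insurer's API: the manufacturer ships the vehicle logic, but the car refuses to start until an externally supplied ethical core has been linked in.

# Sketch of a "car without a soul": the manufacturer ships the vehicle logic, but
# a third party (for example an insurer) must supply the ethical core before the
# car will start. Names and interfaces are invented for illustration.
from abc import ABC, abstractmethod
from typing import Optional

class EthicalCore(ABC):
    @abstractmethod
    def resolve(self, scenario: dict) -> str:
        """Return the manoeuvre to execute when a collision is unavoidable."""

class DriverlessCar:
    def __init__(self, ethical_core: Optional[EthicalCore] = None):
        self.ethical_core = ethical_core

    def start(self):
        if self.ethical_core is None:
            raise RuntimeError("No ethical core linked: the car refuses to start.")
        print("Engine started.")

    def on_imminent_collision(self, scenario: dict) -> str:
        # The hard choice is delegated to whoever wrote (and insures) the core.
        return self.ethical_core.resolve(scenario)

class InsurerCore(EthicalCore):
    """A hypothetical core supplied by an insurance company."""
    def resolve(self, scenario: dict) -> str:
        return "brake and stay in lane"  # whatever policy the insurer will defend in court

car = DriverlessCar(ethical_core=InsurerCore())
car.start()
print(car.on_imminent_collision({"obstacles": ["motorcyclist", "pedestrians"]}))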

The question is whether insurance companies would be willing to go along. They would certainly feel pressure to adapt to a world of driverless cars, but the brilliant move for the car company would be to promise increased efficiency and reach to the whole insurance industry (more clients), and act as a middleman. By encouraging collaboration between the insurance companies, ostensibly to help them save money on R&D, standardise good practice, exchange regulatory tips, etc., the car company would in effect get the insurance industry to crowdsource its own obsolescence. This would allow the car company to eventually provide the full product, once all the R&D costs of fine-tuning the ethical core have been shouldered by the insurance companies. Note that this core would only be ethical in name, as it would have been fine-tuned exclusively with cost efficiency in mind.

This assumes there is a car company sufficiently dominant in the car industry to strong-arm insurance companies.

(For other futuristic and "fun" questions on the transformation brought about by driverless cars, see the amazingly cold-blooded If driverless cars save lives, where will we get organs?.)

(Image credit: Wikipedia)

Naiveté, the barber paradox, and frontiers in publishing

../razor.png

At the turn of the XIXth and XXth centuries, some of the best mathematicians had to take a hard look back at the foundations of their science. A turning point in the mathematical story below was the 1900 Paris International Congress of Mathematicians. This post is in honor of the ongoing ICM in Seoul.

In the last decade of the XIXth century, Cantor had developed what is now called Naive Set Theory. Under his theory, one was allowed to introduce expressions such as \(S = \{ x: P(x)\},\) which defines the set \(S\) of all elements \(x\) having property \(P.\)

Russell quickly identified a problem with this. The issue has a nice real-life illustration, known as the barber paradox:

Imagine a town where every man keeps himself clean-shaven, either by shaving himself or by going to the unique male barber in town (but never both). The paradoxical question is then "Who shaves the barber?"

In layman's terms, the paradox does not lie in asking who, but in assuming that one can find a town with such a hair-trimming setup.

In mathematical language, Russell considers the set \(R\) of all the sets which do not contain themselves, and asks whether it contains itself or not. Either answer implies the other, even though the two should be mutually exclusive: a contradiction. This is formulated as:

\begin{equation*} R = \{ x : x \notin x \} \Longrightarrow (R \in R \Leftrightarrow R \notin R). \end{equation*}
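
Spelled out (this is just the standard unwinding of the definition, restated here for convenience), the two directions of the equivalence follow directly from the membership condition defining \(R\):

\begin{align*}
R \in R &\Longrightarrow R \text{ satisfies the condition } x \notin x \Longrightarrow R \notin R,\\
R \notin R &\Longrightarrow R \text{ satisfies the condition } x \notin x \Longrightarrow R \in R.
\end{align*}
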
../frege.jpg

As an answer to this contradiction, Zermelo and others built a new set theory over the first decade of the XXth century that painted Cantor's original definition as too naive. Indeed, while Cantor assumed just any property \(P\) would do and play nice, Russell had shown that this just would not do. The way around this problem was to restrict precisely which properties are nice enough to be allowed in the construction of the naive sets. Still, the logicians involved knew of the importance of these other, not-so-nice constructions, and very much wanted a theoretical basis for them. This led to the substantially more complicated notion of proper class. Together with Cantor's earlier diagonal argument, these additional tools led to spectacular discoveries that rocked mathematics and computer science a few decades later, including the Halting problem and Gödel's incompleteness theorems.

Russell arrived at this question while writing his Principles of Mathematics, an attempt at fully formalising Cantor's original theory. While the barber question could have looked like little more than some Victorian form of trolling to his peers (maybe due to its reflexivity), he was understood right away: it led Frege, for instance, to concede that the foundations of his own Basic Laws of Arithmetic had given way while its second volume was in press.

This carries lessons outside of mathematics too.

Imagine an institution which raises the expectation that its core mission applies equally to all institutions, including itself. It is always instructive and revealing when a real-world situation turns this expectation around, into a barber paradox.

I will give two quick examples.

The first would be that of a generalist newspaper. A journalist's mission should be to consistently report on the news, including news about their own newspaper's failures. Yet at this mission journalists often fail, mostly through self-censorship. The New York Times innovation report is particularly interesting, since it reveals quite a bit of the internal tensions between advertising and editorial staff as they adapt their newspaper to a changing publishing landscape. Yet we can only read this report because it was leaked, presumably by a journalist, to an online outlet! A missed opportunity for the New York Times indeed.

A second example would be that of academic publishing. One would expect researchers to be free to publish in academic journals on the business aspects of academic publishing. In a recent case, four academics submitted an article to a Taylor & Francis journal called Prometheus: Critical Studies in Innovation. The topic was scientific publishing itself. While the article had been accepted by the journal editors, the publisher stalled and censored the publication of some of the information, which was otherwise available from public sources. Eventually the article was published after some editing, and the publisher had to explain its actions. Still, this required the editorial board to threaten to resign en masse, and the publisher was still brazen enough to keep a disclaimer undermining all the research.

While in both cases the institution reacted to protect itself from a perceived threat, ultimately the institution's communication (or lack thereof) was possibly more revealing of its own internal logic than the information it tried to control.

Since we are already talking about barbers, I can't resist riffing a bit more. Russell's paradox leverages a very simple initial statement into the profound conclusion that new mathematical foundations are needed. To reach bold conclusions, simplicity always wins. This might be a corollary of Occam's Razor...

(The comic strip is Logicomix: An Epic Search for Truth, which you can buy on Amazon. Highly recommended!)