Personas for Privacy
A project's threat model is critical for securing users, but traditional threat models have little utility for usable privacy software. A threat model assumes an idealized world in which users know how to operate a system properly, yet real users inevitably compromise themselves through negligence or apathy. Examples abound: people choose short, predictable passwords, accidentally install malware, or sign a PGP key improperly. Without adapting the threat model to a specific user's skills, knowledge, and use case, developers are building for an unrealistic world. A solution to this "user threat" is provided by the development technique known as "Personas."
A "Persona" is a fictional person who represents the user's interests in the development of a software system. Although not a real person, a Persona conveys enough detail about a fictional one that the developer can hold that person in mind when specifying a threat model, proposing new functionality, or evaluating existing functionality.
Priv.ly defines two Personas: a general non-technical user and a user living under a censorship regime. Each Persona leads to different choices in system design, so an initial Persona must be adopted.
For the purposes of the Priv.ly Project, the target user is Frank Jones, a 22-year-old barista living in Chicago, USA. Frank uses many technologies and applications but has little comprehension of, or concern for, the technical foundations his tools are built on. We chose Frank as a starting point for reasons of both necessity and ethics.
Without building for Frank, all the Alices and Bobs of the world will be confined to communicating in a ghetto of cryptographic users. Building for Frank is a practical necessity if a communication system is to be broadly useful. Selecting Frank as the target user also brings into sharp relief a series of ethical choices every software project must make. Many projects point to a threat model to absolve the developer of responsibility for the user's privacy. In the professions of medicine, law, and finance, however, the risks posed to clients require the practitioner to protect clients from themselves. Since users like Frank do not understand the socio-technical nature of electronic privacy, we must endeavor to protect Frank from himself through usability and education.