My motivation in undertaking this research was to improve security mechanisms for young children when creating social connections on mobile games or applications.
The research aim was to examine the use of security mechanisms to authenticate and verify social connections for mobile games and applications aimed at children under 13. My research involved identifying risks faced by children under 13, designing mitigating security mechanisms and producing a criteria-based guideline to evaluate them against. I developed a prototype application to support my evaluation of the security mechanisms. My results showed that end-to-end encryption and approval of new friendships by holders of parental responsibility are essential security mechanisms to authenticate and verify social connections for mobile games and applications aimed at children under 13. The research concluded that appropriate security mechanisms are not currently in place to authenticate and verify secure social connections for mobile games and applications aimed at children under 13; however, it is possible to develop and implement robust security mechanisms.
Support is required for organisations, parents and children through the development of appropriate risk frameworks, parental education and the introduction of additional security mechanisms.
As detailed in Section 2 Literature Review, children's use of mobile games and applications has grown year-on-year, particularly with the increased access to and use of tablet devices, while parents continue to have very little knowledge or guidance in managing the online risks facing their children. There is no legislative framework that addresses how children should be protected by organisations when using their services. In addition, there is no established approach for organisations to identify and quantify risk for external stakeholders such as children.
My motivation in undertaking this research was to explore how a child could have a safe and secure social networking account so that industry practitioners could build upon this research to ensure robust security mechanisms are implemented.
The research aim was to examine the use of security mechanisms to authenticate and verify social connections for mobile games and applications aimed at children under 13. Three related research objectives were identified: produce a criteria-based evaluation guideline based on the aim of the research; examine and evaluate technical approaches to achieving the aim of the research; and develop an iOS application and associated social graph platform.
Section 3 Practical Work outlines how I extended an established risk framework (COSO) to consider risks faced by children under 13 using mobile games and applications while establishing social connections. I designed a number of mitigating security mechanisms and a criteria-based guideline to evaluate them against. I developed a prototype application to implement the security mechanisms to support my evaluation.
My results showed that two of the five proposed security mechanisms, end-to-end encryption and approval of new friendships by holders of parental responsibility, are essential security mechanisms to authenticate and verify social connections for mobile games and applications aimed at children under 13. Section 4 Results, Analysis and Evaluation details each proposed security mechanism's result against the criteria-based guideline I developed.
The research concluded that appropriate security mechanisms are not currently in place to authenticate and verify secure social connections for mobile games and applications aimed at children under 13. It is possible to develop and implement security mechanisms; however, support is required for organisations, parents and children through the development of appropriate risk frameworks, parental education and the introduction of additional security mechanisms. The proposed security mechanisms are effective in minimising the identified risks children face and are feasible for practitioners wishing to develop a friendship service within their game or application to implement. Section 6 Conclusions and Recommendations provides more detail on the conclusions and also details a number of recommendations for further study.
The aim of the literature review is to thoroughly evaluate the relevant legislative frameworks, commercial approaches and academic theories related to the research aim: to examine the use of security mechanisms to authenticate and verify social connections for mobile games and applications aimed at children under 13.
The report Enhancing Child Safety & Online Technologies: Final report of the Internet Safety Technical Task Force to the Multi-State Working Group on Social Networking of State Attorneys General of the United States concluded that "no single technology reviewed could solve every aspect of online safety for minors, or even one aspect of it one hundred percent of the time" (Force, 2008).
Since its publication in 2008, very little academic research has continued in this specific area. A broad, multidisciplinary range of academic, commercial and governmental sources was investigated, critically reviewed and evaluated to establish and identify an approach to address the research question. Staksrud and Livingstone (Staksrud, 2009) categorised the risks that children face as:
This literature review examines the risks against the existing literature on key security mechanisms intended to minimise them:
Research from the UK communications regulator, the Office of Communications (Ofcom), indicates that while the majority of parents feel they know enough to help their child manage online risks (77%), nearly half of parents whose children go online (43%) feel their children know more about the internet than they do. This rises to nearly two thirds (62%) of parents of children aged 12-15 feeling less knowledgeable than their children (OFCOM, 2015b). These statistics imply that parents do not feel fully equipped and educated to manage the online risks their children face.
Despite this lack of parental education, Ofcom's Children and parents: Media use and attitudes report 2015 indicated that one in three children in the UK owns a tablet and 71% of children live in a home with a tablet (OFCOM, 2015a). Ownership has grown since 2014, when six in ten (62%) children aged 5-15 used a tablet at home, itself a rise of half since 2013 (42% in 2013). Twice as many children aged 5-15 are using a tablet to go online (42%, versus 23% in 2013).
Parents often support their children in violating social media services' terms and conditions (Instagram, 2016, Facebook, 2015, Twitter, 2016) (Danah Boyd, 2011). This has allowed more than three-quarters of children aged 10 to 12 in the UK to have social media accounts even though they are below the age limit, a survey for CBBC Newsround suggested (Coughlan, 2016).
Findings from Ofcom's Children's online behaviour: issues of risk and trust (OFCOM, 2014b) indicate that children in the 8-11 age range use the internet for entertainment and to have fun. Games tended to dominate the online repertoires of both girls and boys.
In addition, Steeves and Webster concluded that children who have high levels of either online social interaction or identity play are more willing to disclose personal information, and less likely to display privacy-protective behaviors, than those with lower levels of these social activities, independent of the extent to which their parents supervise their online activities (Steeves and Webster, 2007). This indicates that additional privacy protections are required to ensure the safety of children online.
These studies set the context for why this research is important. The studies referenced show a rapid growth of mobile device usage among children, for whom security is vitally important to safety; however, the literature does not comment on the security mechanisms in place to protect children.
Many social media platforms such as Facebook (Facebook, 2016a), Reddit (Reddit, 2016) and Twitter (Twitter, 2014) engage with external security researchers through bug bounty programs to improve the security control environment. While such activities reduce security breaches, they do not stop security vulnerabilities being discovered and reported. In 2015, the electronic toy and educational material seller VTech had a security breach that resulted in five million customers' data being made available; much of it involved children's personal data (Kleinman, 2015).
Additional levels of personal data have also been exposed through the use of mobile devices, such as the device owner's physical location. Mobile geolocation-based social platforms, such as Tinder, have allowed any Tinder user to find the location of another Tinder user to within 100 feet (Veytsman, 2014).
Terms and conditions of major social media platforms (Instagram, 2016, Facebook, 2015, Twitter, 2016) exclude users under the age of 13. Mobile games aimed at children, such as Clash of Clans, also exclude under-13s as part of their terms of service (Supercell, 2015). Exclusion appears to be the most prevalent security mechanism for managing risk associated with under-13s.
Boyd's research entitled Why parents help their children lie to Facebook about age (Danah Boyd, 2011) found that parents often help their children bypass age exclusions in terms and conditions in order to allow them to access the service. However, the study did not assess terms-and-conditions exclusions or other security mechanisms against a criteria-based guideline to evaluate their appropriateness or effectiveness for under-13s.
There is a large volume of academic literature regarding social media platforms' approaches to privacy for users over 13 years of age. A number of studies conclude that the privacy settings of the social media platforms reviewed fall short (Michelle Madejski, 2011, Hanna Krasnova, 2009) and that there is a general failure by both the social media platforms themselves and their users to understand the potentially dangerous implications for individual privacy that they pose (Kurt Thomas, 2010).
The policy of excluding under-13s in terms and conditions has resulted in little attention being paid to social media platforms' approaches to security mechanisms for users younger than 13 years of age. However, we can reasonably assume that the same privacy issues affecting over-13s also affect under-13s.
Messaging applications such as WhatsApp, Facebook's Messenger Secret Conversations and Apple's iMessage have implemented end-to-end encryption of messages between one or more parties (WhatsApp, 2016, Facebook, 2016b, Apple, 2016b). Both Facebook and Apple have implemented a framework that generates a public/private key pair for each device associated with an account and encrypts each message against each device's keys for delivery. For Apple, when a new device is added to the account, each existing device is notified of the addition. Facebook also allows decryption of messages should one participant report another for abusive content.
The literature indicates that social media platforms are improving end-to-end encryption as a security mechanism yet the literature does not examine encryption as a security mechanism for establishing social connections.
In summary, organisations rely on bug bounties, terms-and-conditions exclusions, privacy settings and encryption to protect users; however, these security mechanisms have not been evaluated for their suitability as security mechanisms for under-13s when establishing social connections.
Article 8 of the EU General Data Protection Regulation (GDPR) (Parliament, 2016) requires that where the personal data of a child under 16 is being processed to provide information society services (for example, online businesses, social networking sites and so on), consent must be obtained from the holder of parental responsibility for the child. Member states are allowed to lower this threshold where appropriate, but not below the age of 13. The EU has asserted that "Companies are now directly responsible for data protection compliance wherever they are based (and not just their EU-based offices) as long as they are processing EU citizens' personal data" (Hasan, 2016). The legislation does not address the security mechanisms necessary to implement the requirements.
Within the context of the Internet and other digital services, specific legislation has been developed to address concerns regarding a minor's data protection and how consent would be obtained to use such services. For example, the Audiovisual Media Services Directive (Parliament, 2010) governs EU-wide coordination of national legislation on all audiovisual media, both traditional TV broadcasts and on-demand services.
Ofcom has interpreted the Audiovisual Media Services Directive as requiring a Content Access Control System (CAC System) to verify that the user is aged 18 or over; if verification does not occur, the CAC System must limit access to content through the use of security controls such as passwords or PINs (OFCOM, 2016). While Ofcom stipulates that security mechanisms be in place for users under 18, the key aim is to restrict access to inappropriate content; the Directive does not address the risks involved in establishing social connections.
In the United States of America, the U.S. Congress enacted the Children's Online Privacy Protection Act (COPPA) in 1998, enabling the Federal Trade Commission (FTC) to regulate commercial website operators whose sites are targeted at children or who have actual knowledge of a child's participation. The Act requires website operators to have processes to ensure they obtain verifiable parental consent prior to the collection and use of information on children under 13 years old (Congress, 1998, Commission, 1998, Matecki, 2010).
The FTC asserts that "Foreign-based websites and online services must comply with COPPA if they are directed to children in the United States, or if they knowingly collect personal information from children in the U.S." (Commission, 2015). This assertion has led to companies such as Singapore-based InMobi being investigated and fined by the FTC for violating COPPA (Commission, 2016). COPPA has become the worldwide de facto legislative framework for safeguarding minors using information services.
| | EU GDPR | US COPPA |
| --- | --- | --- |
| Personally identifiable information | A natural person who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person | (A) a first and last name; (B) a home or other physical address including street name and name of a city or town; (C) an e-mail address; (D) a telephone number; (E) a Social Security number; (F) any other identifier that the Commission determines permits the physical or online contacting of a specific individual; or (G) information concerning the child or the parents of that child that the website collects online from the child and combines with an identifier described in this paragraph |
| Age of child | Individual under the age of 16 or lower. "Member States may provide by law for a lower age for those purposes provided that such lower age is not below 13 years." | Individual under the age of 13 or lower |
| Jurisdiction | Worldwide, if an organisation is processing EU citizens' personal data | Any State or foreign nation |
| Consent by holder of parental responsibility over the child obtained by company/service | Reasonable efforts to verify consent is given or authorised | Any reasonable effort to ensure that a parent of a child receives notice of the operator's personal information collection, use, and disclosure practices, and authorizes the collection, use, and disclosure, as applicable, of personal information and the subsequent use of that information before that information is collected from that child |
Despite these legislative frameworks being in place, studies such as Closing the Barn Door: The Effect of Parental Supervision on Canadian Children's Online Privacy (Steeves and Webster, 2007), Data Mining the Kids: Surveillance and Market Research Strategies in Children's Online Games (Grace Chung, 2005) and Why parents help their children lie to Facebook about age have concluded that "The online industry's response to COPPA's under-13 rule and verifiable parental consent model is largely proving incompatible, and at times, antithetical to many parents' ideas of how to help their children navigate the online world" (Danah Boyd, 2011).
These studies underline the ineffectiveness of the legislative measures. Legislative frameworks do not directly address the risks that children face as categorised by Staksrud and Livingstone (Staksrud, 2009). Rather, legislation continues to compel information society services to implement technical mechanisms based on age-based models.
Islam, Mouratidis and Jürjens set out a framework to support the alignment of secure software engineering with legal regulations (Shareeful Islam, 2010). I reviewed this framework by mapping the EU General Data Protection Regulation (GDPR) against a scenario application. The scenario application described a mobile game that could allow children to establish social networking relationships, similar to some existing commercial mobile games, such as Clash of Clans, that have limited social networking features.
Upon analysing the GDPR, I realised that while these regulations would form the basis of guidelines for privacy policies, auditing approaches, and technical requirements for account creation, authorisation of a child's account and data processing, the regulation does not address the process of creating social connections. While the framework by Islam, Mouratidis and Jürjens proved to be an interesting approach, because comprehensive and relevant legislation does not exist, their framework is not suited to this research study. Once I established that I could not adapt the framework to the research objectives, I abandoned this approach in favour of reviewing control and risk management frameworks and approaches.
I researched existing IT governance and standards frameworks to provide a structured means of examining security measures. I considered common frameworks such as ISO 27001, COBIT and COSO.
Control Objectives for Information and Related Technology (COBIT) is an IT governance control framework. COBIT enables organisations to align IT strategy with business goals, as well as to consider IT regulatory compliance and risk management. COBIT links business and IT goals, defines performance indicators (metrics), assesses process maturity (maturity models) and defines the responsibilities of business and IT process owners (Association, 2016).
The COSO framework has a broader focus across the organisation. COSO considers end-to-end enterprise risk management, including IT considerations.
The International Organization for Standardization's ISO 27001 (Standardization, 2013) is the international best-practice standard for an Information Security Management System (ISMS). ISO 27001 defines methods and practices for implementing information security in organisations, with detailed steps on how these are implemented. It aims to provide reliable and secure communication and data exchange within organisations (Arora, 2010).
| Framework | Child 1 | Child 2 | Holder of Parental Responsibility | Social Media Platform Provider |
| --- | --- | --- | --- | --- |
| ISO | None | None | None | Limited to organisation's information security |
| COBIT | None | None | None | Limited to organisation's IT governance |
| COSO | None | None | None | End-to-end enterprise risk management |
The scope assessment of each framework identified that none of the frameworks considered risks to stakeholders outside the organisation. ISO focuses on an organisation's information security, while COBIT considers the IT governance of an organisation, including information security. COSO has a broader scope as it addresses enterprise-wide risk management, including IT considerations. Each of these frameworks focuses inwards on the organisation without considering the control needs of external stakeholders such as users, other than as risk impacts, e.g. regulatory fines or reputational damage for data breaches.
As a result, social media platform providers performing risk and control assessments using existing frameworks would primarily identify, and hence mitigate, the risks that they face, as distinct from the risks posed to their users. Organisations may include security measures that protect users, but without using a framework that explicitly considers users' needs, organisations cannot adequately design security measures to fully protect them.
While none of the frameworks fully addressed the risks across the user journey, COSO was the broadest in terms of its scope and explicitly required risks to be fully identified and assessed as a precursor to designing security mechanisms. I selected the COSO framework as a basis for my study and expanded it to include the needs of the users as well as the social media platform's needs when evaluating risks and controls.
During the course of my research into risk management, I identified the Institute of Internal Auditors (IIA) as a key source. The IIA is one of the five entities that make up COSO. (Auditors, 2016b).
I looked to model the application using SecureUML (David Basin, 2003, Torsten Lodderstedt, 2002), a modelling language designed to integrate information relevant to access control into application models defined with the Unified Modelling Language (UML). In their comparison of the SecureUML, UMLsec and PL/SQL security models, Matulevičius, Lakk and Lepmets (Raimundas Matulevičius, 2011) concluded that the SecureUML security model is of higher quality than UMLsec and PL/SQL at the systems design stage. As Matulevičius notes, SecureUML is principally intended for modelling solutions through Role-Based Access Control (RBAC) models (Raimundas Matulevičius, 2010).
As I moved to a risk management framework approach, I required a wider analysis technique to identify security requirements based upon the proposed security mechanisms. Secure Tropos (Mouratidis, 2013) provided such a technique: its approach aims to identify the agents of the system and the roles which actors can play, to map agents to roles, and to visualise the relationships between actors, roles and goals.
While undertaking the literature review, it became evident that many related, overlapping topics have not been resolved by legislation, industry, or academia.
Children's online usage has grown year-on-year, particularly with the increased access to and use of tablet devices, while parents continue to have very little knowledge or guidance in managing the online risks facing their children.
There is no legislative framework that addresses how children should be protected by organisations when using their services. Equally there is no established approach for organisations to identify and quantify risk for external stakeholders such as children.
As William Heath stated, "you don't have to look very far into safeguarding to realise that a simple technical mechanism to give an age isn't the same as safeguarding children from the risks that they face" (Elevate, 2015). This statement encapsulates why the relevant legislative frameworks, risk management frameworks, commercial approaches and academic theories need to intersect to answer the research question. There is a gap in the current body of literature regarding the appropriateness of security mechanisms for young children when forming social connections. The research undertaken must directly address these deficiencies while still addressing the types of risks that children face as categorised by Staksrud and Livingstone (Staksrud, 2009).
I examined typical social media user friendship journeys to help identify risks and security mechanisms. Existing social media platforms have two basic models for friendship within their platforms: bidirectional and unidirectional.
Platforms such as Facebook and LinkedIn utilise bidirectional relationships that require both parties to accept the relationship before any additional interaction on the social media platform can commence (Wu, 2012b).
A typical user journey on a social media platform that requires bi-directional friendship would entail:
Social media platforms such as Twitter utilise unidirectional relationships. This follower model allows people to follow and interact with a person without that person reciprocating or validating the relationship (Wu, 2012a).
A typical user journey on a social media platform that utilises unidirectional friendship would entail:
No social media platform currently defers to, or informs, trusted third parties that an account they are responsible for has received a friendship request.
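The distinction between the two relationship models above can be sketched as a minimal, illustrative data model. This is my own toy sketch, not any platform's actual API; all class and method names are assumptions:

```python
class SocialGraph:
    """Toy in-memory graph contrasting the two friendship models."""

    def __init__(self):
        self.follows = set()   # unidirectional: (follower, followee) pairs
        self.pending = set()   # bidirectional: requests awaiting acceptance
        self.friends = set()   # bidirectional: confirmed, stored unordered

    # --- Unidirectional (follower) model: no reciprocation required ---
    def follow(self, follower, followee):
        self.follows.add((follower, followee))

    def is_following(self, follower, followee):
        return (follower, followee) in self.follows

    # --- Bidirectional (friendship) model: both parties must agree ---
    def request_friendship(self, requester, recipient):
        self.pending.add((requester, recipient))

    def accept_friendship(self, recipient, requester):
        if (requester, recipient) in self.pending:
            self.pending.discard((requester, recipient))
            self.friends.add(frozenset((requester, recipient)))

    def are_friends(self, a, b):
        return frozenset((a, b)) in self.friends


g = SocialGraph()
g.follow("carol", "dave")              # unidirectional: effective immediately
g.request_friendship("alice", "bob")   # bidirectional: not yet friends
before = g.are_friends("alice", "bob")
g.accept_friendship("bob", "alice")
after = g.are_friends("alice", "bob")  # friends only after acceptance
```

Note that the follower model takes effect immediately, whereas the friendship model requires an explicit acceptance step; it is this second step that later mechanisms extend with parental approval.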
Once I had mapped out the processes for typical user journeys I began considering the associated risks and security mechanisms. I researched the study of risk management to learn about risk and control frameworks with a view to applying them to this research question.
The COSO Framework is a widely used framework for providing robust controls over financial reporting. The COSO internal control framework sets out five components:
The COSO framework is relevant to the end-to-end enterprise risk management of an organisation. While a social media platform organisation could use the COSO framework to develop controls for its whole organisation, to address this research question I focused on the Risk Assessment and Control Activities components of the COSO framework. The 2013 revision of the COSO framework sets out 17 principles which support the five components. I identified two principles within Risk Assessment and Control Activities that would provide an approach to identifying risks relating to authenticating social media connections and controls to mitigate them. The principles are:
Principle 7: The organization identifies risks to the achievement of its objectives across the entity and analyses risks as a basis for determining how the risks should be managed.
Principle 10: The organization selects and develops control activities that contribute to the mitigation of risks to the achievement of objectives to acceptable levels.
I performed a Risk Assessment to identify the risks and their impact and likelihood. I considered the risks that children face on social media, the actors who may cause them, the impact that risk would have upon a child and the likelihood of the risk happening.
The three key risks are:
Content risk is out of scope as this research question does not directly relate to authenticating the content to which children are exposed.
The table below sets out examples of how each of the identified risks could arise and assesses the impact and likelihood. In addition, I applied Turner's quantitative measure for assessing risk impact to a project (Turner, 1993).
| Risk type | Who/what causes it? | What makes it possible? | What is the incident and how does it cause harm (impact)? | Likelihood | Overall risk impact |
| --- | --- | --- | --- | --- | --- |
| Contact risk | Hacker | Insufficient security systems and monitoring | Compromises system and accesses PII of children and holders of parental responsibility | | |
| Contact risk | System developer | Insufficient security training and testing | Inadvertently introduces security compromises | | |
| Contact risk | Eavesdropper | Insufficient protection of connection | Eavesdropping on participants' communications | | |
| Contact risk | System failure | Reliance on third-party vendors without adequate security testing and security disclosure procedures | Third-party security compromise (e.g. Heartbleed) | | |
| Contact risk | Bad actor | Insufficient security and verification systems in place that allow a bad actor to take part | Bad actor is allowed to engage with child participants | | |
| Contact risk | Child participant | System/holders of parental responsibility allow child participant to engage in contact risk with bad actors | Participant allowed to engage in communication which places them at risk | 4 | High |
| Contact risk | Child participant | System/holders of parental responsibility allow child participant to engage in contact risk with bad actors | Participant allowed to play an active role in risk-taking | | |
| Contact risk | Holders of parental responsibility | Holders of parental responsibility do not understand privileges provided to child | Holders of parental responsibility unwittingly allow child participant escalated privileges | | |
| Contact risk | Mobile device is retrieved by bad actor | Insufficient security enabled on device; insufficient security of data held on device | Bad actor has access to mobile device's data | | |
In reviewing the typical user journeys and inherent risks of social media platforms, I designed security mechanisms to authenticate and verify social connections for mobile games and applications aimed at children under 13. I based the proposed security mechanisms upon the IIA definition of a control: "A control is any action taken by management, the board, and other parties to manage risk and increase the likelihood that established objectives and goals will be achieved." (Auditors, 2016a)
The proposed security mechanisms are as follows:
All user interaction will be encrypted end-to-end using public/private key pairs, following a security model similar to the one both Apple and Facebook have designed for their respective messaging services (Apple, 2016b, Facebook, 2016b).
Each account on each device generates a public/private key pair. The private key is stored on the device, within the device's crypto data store, and never leaves the device. The device/account public key is sent to the service's account directory service and is registered as belonging to that device for that user's account.
When a user creates a user interaction message, each of the intended recipients' public keys is used to encrypt the interaction. Only the recipients' private keys can successfully decrypt the interaction message.
Every interaction is transported over SSL, and the database holding the data is encrypted at rest.
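The per-device key registration and fan-out described above can be simulated with a short sketch. This is purely illustrative of the message flow: the "keys" below are random placeholder tokens and the encrypt/decrypt functions are toy stand-ins, not real cryptography (a real implementation would use an asymmetric scheme such as Curve25519); all names are my own assumptions:

```python
import os

class DirectoryService:
    """Toy account directory: account -> {device_id: public_key}."""
    def __init__(self):
        self.registry = {}

    def register(self, account, device_id, public_key):
        self.registry.setdefault(account, {})[device_id] = public_key

    def public_keys(self, account):
        return dict(self.registry[account])

def generate_keypair():
    # Stand-in for real asymmetric key generation. NOT secure.
    private = os.urandom(16).hex()
    return private, "pub:" + private

def encrypt_for(public_key, plaintext):
    # Stand-in: real systems encrypt against the recipient's public key.
    return {"locked_to": public_key, "ciphertext": plaintext[::-1]}

def decrypt_with(private_key, envelope):
    # Only the matching private key can open the envelope.
    if envelope["locked_to"] != "pub:" + private_key:
        raise ValueError("wrong device key")
    return envelope["ciphertext"][::-1]

directory = DirectoryService()
bob_priv, bob_pub = generate_keypair()
directory.register("bob", "bobs-ipad", bob_pub)   # public key leaves the device

# The sender encrypts one copy of the message per recipient device key.
envelopes = {dev: encrypt_for(pk, "hi Bob")
             for dev, pk in directory.public_keys("bob").items()}
```

The key property the sketch demonstrates is the fan-out: the directory only ever holds public keys, and each registered device receives its own independently encrypted copy of the interaction.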
When creating a friendship, children must be within close vicinity of each other (approximately 5 metres) and each step of the user journey must take place within a short timeframe (approximately 10 minutes in total).
Each user interaction will record the geolocation and time of the event, which will be compared with the previous interaction's location and time.
Requiring physical proximity implies an already existing relationship in the physical world between the children. This approach minimises contact risk, as parents are typically aware of who is in direct contact with their children, where they are and when this contact took place.
Developers of the social media platform would need to disable the friendship feature should a child deny the application or game the ability to access and collect their geolocation.
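The proximity-and-timeframe comparison described above can be sketched as follows. The thresholds match the approximate figures stated in this section (5 metres, 10 minutes); the function names and event representation are my own illustrative assumptions:

```python
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

MAX_DISTANCE_M = 5.0                  # approximate close-vicinity threshold
MAX_WINDOW = timedelta(minutes=10)    # approximate total journey timeframe

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * r * asin(sqrt(a))

def within_vicinity(event_a, event_b):
    """Each event is (lat, lon, datetime); both checks must pass."""
    close = haversine_m(event_a[0], event_a[1],
                        event_b[0], event_b[1]) <= MAX_DISTANCE_M
    timely = abs(event_a[2] - event_b[2]) <= MAX_WINDOW
    return close and timely
```

In practice a mobile platform's location API reports an accuracy radius alongside each fix, so a production check would also need to reject fixes whose reported accuracy is coarser than the proximity threshold.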
The inviter and the recipient of any friendship invitation will be shown verification codes on their devices that they must physically show each other. These device-displayed verification codes reinforce the physical proximity required to establish a friendship.
On the device of the inviter of the friendship, an invite code would be displayed with instructions to show it to the intended recipient. The invite code could simply be six randomly selected digits. This would produce a total of 1,000,000 unique invite codes (10^6), based on a six-digit invite code.
The invite code requires activation within a short timeframe, approximately ten minutes. This time limit, in addition to geographic proximity, limits the ability of bad actors to search for random invite codes based on the invite code alone. Rate-limiting invite code searches by devices and accounts would further minimise automated searching for invite codes by bad actors.
Once an invite code is activated within the timeframe, the recipient is presented with an automatically generated six-digit pin and instructed to show the pin to the inviter. This pin would be based on a random selection of digits. On the inviter's device, a ten-digit pin pad would be displayed allowing them to enter the pin.
This follows the same friendship model that both Facebook and LinkedIn have designed for their respective social media platforms (Wu, 2012a). Friendships must be bidirectional; children cannot interact with each other until both have approved the relationship. This approach would ensure that a trusted relationship is agreed and would eliminate any anonymous or unknown followers.
By not allowing anonymous followers, the approach would limit contact risk by bad actors and those that could encourage conduct risk behaviour.
Holders of parental responsibility for a child must engage in the completion and verification of a friendship. Holders would be notified of the intended friendship request, and a holder for each child must approve the request before the children can interact with each other in-game.
The notification would specify who initiated the request, the general location the request came from (nearest town and country) and the time the request took place. Holders of parental responsibility can view and end any friendships of the children they are responsible for. Once a friendship has been ended, no notification is sent to either the child or their holders of parental responsibility.
The implementation of these security mechanisms would alter the friendship user journeys that we considered earlier in this Section. Alice and Bob wish to become friends within a game or application. The user-journey that they must now undertake to become friends is as follows:
To evaluate the proposed security mechanisms to authenticate and verify social connections, I developed a prototype to test the effectiveness, feasibility and auditability of each security mechanism. The scope of the prototype is intended to only evaluate the proposed user journey as it pertains to the objectives of the research and the security mechanisms outlined.
The development of the prototype application followed the approach outlined below.
To understand the data model required to develop the prototype, I visualised the relationship between children and holders of parental responsibility as a social graph diagram. This visualisation allowed me to develop the underlying data rules required for the prototype. The prototype's domain model would require:
The prototype was developed for iOS devices only; this decision was based on the availability of mobile devices that I had access to for building and testing. However, the proposed security mechanisms and their technical implementation are agnostic of operating system and mobile device type and could be implemented in the future on Android and Microsoft based mobile devices.
The iOS prototype application communicates with a Friendship web service. This web service is a REST based API that stores and manages users' data, such as users' public keys, and manages their relationships with one another. The service is accessed only over SSL.
Modelling the friendship web service, I utilised the Secure Tropos analysis technique to identify security requirements. The analysis technique aims to identify:
Agents in the Secure Tropos analysis technique refer to actual participants or software within a system.
|Agent||Description||Abilities||Important Features||Certifications / Accreditations|
|Child||Core user of the game||Interact with the Games Service Agent||Must be under 13 years of age. Have at least one ‘holder of parental responsibility’ linked to their account.|| |
|Guardian||User that is linked to the Child agent as ‘holder of parental responsibility’||Ability to control the access and permissions of their linked Child Agents||Must be verified as being over 13 years of age. Must be linked to one or more Child Agent accounts.||Certified as a ‘holder of parental responsibility’ over the child as per the EU General Data Protection Regulation (GDPR)|
|Game Web Service||Web service that manages the game state||Allows Child agents to access and ‘play’ the game||All agents accessing the service must be authenticated.|| |
|Game developers and administrators||Develops and administers the game and its infrastructure||Develops game functionality. Administrates the game’s infrastructure. Manages the game’s economy. Investigates and resolves any in-game harassment, offensive material and/or behaviour.|| ||All staff members have been verified to official standards that they are allowed to work with vulnerable groups such as children. For example, the UK’s Disclosure and Barring Service.|
Roles in the Secure Tropos analysis technique are an abstraction of the agents, grouping them into roles of responsibility within the system. Roles will typically be required to produce generated resources as part of fulfilling a goal within the system.
|Role||Description||Responsibilities||Goals|
|Holder of parental responsibility||One or more Guardian Agents that can act as the ‘holder of parental responsibility’ for a Child Agent||Responsible for allowing one or many children to access and play the game||Ensure that the children they are responsible for are safe to play and interact with the game.|
|Moderation staff||Role for Game developers and administrators agents to moderate in-game issues||Responsible for abuse reporting and investigation||Ensuring the safety of all users playing the game, and instilling trust from parents that their children are safe playing it.|
|Data controller||Role for Game developers and administrators agents to audit data and privacy compliance||Responsible for auditing that data and privacy policies are being adhered to||Ensure that the game complies with all data and privacy requirements|
Mapping the roles against agents provides an overview of the basic access-level controls and authorisations an agent has been assigned within the system.
|Agent||Role|
|Guardian||Holder of parental responsibility|
|Game developers and administrators||Moderation staff|
|Game developers and administrators||Data controller|
The Secure Tropos social view intends to visualise the relationships between the agents, roles and generated resources that produce a goal within the system.
As part of designing the security mechanisms to mitigate the identified risks, I attempted to document each risk, the control failure associated with that risk, and the impact of that control failing.
|Risk||Control failure||Impact|
|Hackers, eavesdroppers and bad actors would be able to intercept communication between children using the service.||Service is not encrypted end-to-end||System would not be trustworthy with PII data or with allowing children to interact with one another. Children would be exposed to contact risk.|
|Bad actors could solicit children via other media channels without an existing “real world” relationship||Verification is not based on geolocation||Children would be exposed to contact risk|
|Bad actors could solicit children via other media channels without an existing “real world” relationship||Device display verification codes not required||Children would be exposed to contact risk|
|Bad actors could contact, ‘follow’ and interact with a child without the child’s or ‘holders of parental responsibility’s’ knowledge or consent||Bidirectional friendship not required||Children would be exposed to contact risk and content risk (e.g. bullying)|
|A child would be able to ‘friend’ others, including bad actors, without the holders of parental responsibility’s knowledge or oversight||Holders of parental responsibility not required to approve friendship||Children could be exposed to contact risk|
To implement the application to support the user journey I scoped and modelled the interaction between the iOS application and the Web Service API.
For the iOS application prototype to evaluate the security mechanisms, an interface was designed to allow multiple roles through each user journey. The screens within the prototype allow you to:
This multi-role approach within the prototype application allowed me to test the user journey and security mechanisms without making multiple, uncontrolled application modifications that required additional recompilation and deployment of the application.
In modelling the required web service endpoints to achieve the user journey, I was able to describe and create a simple REST-based micro-service to provide the application with the necessary data store to support the user journey.
I chose to follow Apple's standard architectural pattern for developing the prototype, the Model-View-Controller (MVC) pattern, rather than alternatives such as the Model-View-Presenter (MVP) or Model-View-ViewModel (MVVM) patterns. The MVC pattern is built into the development approach of Xcode, which presents, generates and organises Controllers and Views.
While the long-term unit testing and maintainability of an iOS application written using the MVC pattern may be difficult, these factors and decisions are outside the scope of the research objectives and do not affect the prototype's objective within the research.
Development for iOS in Xcode provides two programming language options, Swift or Objective-C. I chose to develop using Swift as I find the code easier to read and maintain, and the language provides implicit namespaces. The application's classes are designed to follow the S.O.L.I.D design principles. These principles ensure that each class has a single responsibility within the application, which allowed classes to be easily extended as I iterated on the development of the application.
I utilised the microservice pattern for the service, as this architectural pattern ensures that an API serves a single business utility, capability or feature. AWS was used as it provides services that allow the mapping of API endpoints to single Lambda functions, easily allowing me to develop the microservice. This simplified the development of the endpoints through single, reusable functions and allowed me to quickly iterate on the code of a single function without impacting the entire web service.
A REST API was preferred over a SOAP interface or using a real-time communications protocol system such as XMPP as it allowed for the creation of a simple application design and communication method to illustrate the required endpoints and interactions for the service. The concepts of the REST API are easily transferable to other communications protocols and methods without requiring domain specific knowledge of these protocols or extensions to support the security mechanisms, such as XMPP not natively supporting end-to-end encryption.
Each endpoint has been modelled to describe how a developer would access it. The endpoints the iOS application calls are as follows:
Allow a user to create an invite code to start the friendship user journey.
|Parameter||Required||Type||Description|
|Userid||True||Integer||Account id of user who is creating the invitation code|
|Geo||True||Decimal||Latitude and longitude pair of the user's current position|
Returns invite code
Allow a user to search for an invite code and confirm the second step of the friendship user journey.
|Parameter||Required||Type||Description|
|Userid||True||Integer||Account id of user who is creating the invitation code|
|Geo||True||Decimal||Latitude and longitude pair of the user's current position|
Returns a random pin and the inviter's public keys for storage on the device.
Verification of the pin presented in the previous user journey step.
Note: these parameters are JSON formatted and encrypted against the inviter's public key. Only the intended recipient will be able to successfully decrypt the JSON package.
|Parameter||Required||Type||Description|
|Userid||True||Integer||Account id of user who is creating the invitation code|
|Geo||True||Decimal||Latitude and longitude pair of the user's current position|
Allow a holder of parental responsibility the ability to display an activity feed of a child they are responsible for.
|Parameter||Required||Type||Description|
|Userid||True||Integer||Account id of a holder of parental responsibility|
Returns full activity list for that user if userid is a holder of parental responsibility
Implementing the prototype, I followed the approach outlined in diagram 2.
The Lambda functions require the modules mysql (Wilson, 2016) and haversine (Justice, 2016). The mysql module provides a driver for Node.js to communicate directly with the MySQL instance. The haversine module provides the haversine formula, which calculates the distance between two latitude and longitude pairs, as a simple API. This in turn allowed me to compare the distance in kilometres between two actors.
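The proximity check can be sketched by writing out the haversine formula directly, which mirrors the calculation the haversine module wraps; the 5-metre radius is the approximate proximity requirement from the user journey, and `withinProximity` is an illustrative helper name:

```javascript
// Sketch of the geolocation proximity check using the haversine formula.
const PROXIMITY_METRES = 5; // approximate radius required between the children

function haversineMetres(a, b) {
  const R = 6371000; // mean Earth radius in metres
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(b.latitude - a.latitude);
  const dLon = toRad(b.longitude - a.longitude);
  // Haversine of the central angle between the two points.
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.latitude)) * Math.cos(toRad(b.latitude)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function withinProximity(a, b) {
  return haversineMetres(a, b) <= PROXIMITY_METRES;
}
```

Each step of the friendship journey would run this check against the latitude and longitude pairs reported by the two devices.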
Once the web service was completed and tested, I mapped out the screens required for the prototype application in Xcode. These screens allowed me to visualise the view controllers and the web service model required to interact with the web service.
As the prototype application is broken into three layers, Lambda functions, API Gateway endpoints and iOS application, it allowed me to test each layer of the prototype individually. As each layer was built and tested, I could continue developing the next knowing that it was working as intended.
As I developed each Lambda function, I was able to mimic the data required by the function as part of the deployment process. Using this deploy-and-test approach, I was able to pinpoint errors, issues and missing functionality in the functions quickly using consistent test data.
The AWS API Gateway service provides a testing playground for each endpoint I had created and mapped to a specific Lambda function. This playground allowed me to quickly test that each endpoint worked as expected. Once the API was published, I was able to retest the endpoints with Postman, an HTTP client for testing web services.
The IIA has published standards for internal auditing which include guidance regarding how controls should be evaluated. Standard 2130.A1 of the IIA's International Standards for the Professional Practice of Internal Auditing (Standards) states:
2130.A1- The internal audit activity must evaluate the adequacy and effectiveness of controls in responding to risks within the organization's governance, operations, and information systems regarding the:
- Reliability and integrity of financial and operational information;
- Effectiveness and efficiency of operations and programs;
- Safeguarding of assets; and
- Compliance with laws, regulations, policies, procedures, and contracts.
I applied the principles from the IIA Standards and identified the following criteria against which I analysed the proposed security mechanisms:
Under the criteria of feasibility, I considered:
i) How technically challenging it would be for social media platforms to implement the security mechanism both in terms of designing the security mechanism and operating it on an ongoing basis. This included factors such as the infrastructure and software required to develop and implement the security mechanism. I considered the amount of time and supervision required to develop and maintain the code and the amount of additional customer data that would need to be captured and managed.
I also considered how feasible the security mechanism would be for a holder of parental responsibility to operate, for security mechanisms where the holder of parental responsibility has an active role in the security mechanisms operation. This included factors such as the level of technical knowledge required by the holder of parental responsibility to operate the security mechanism, the time a holder of parental responsibility would spend operating the security mechanism, and the infrastructure the parent would need to operate the security mechanism.
Finally, for security mechanisms where the child plays an active part in the security mechanism operation, I considered the practicability of the security mechanism. In these instances, I considered the ability needed to operate the security mechanism, for example the level of concentration necessary, ability to read instructions, recognise numbers and input numbers. I also considered the devices needed for the child to operate the security mechanism.
While feasibility and effectiveness were criteria that I identified early on in my work, security mechanism auditability, or the creation of evidence to support and verify the operation of each security mechanism, became apparent to me as necessary criteria once I had developed and implemented the prototype application. Once I developed the application, I realised that organisations would need to be able to demonstrate that security mechanisms worked in order to support the monitoring of risks and security mechanisms and also to support assurance to stakeholders including their senior management, regulators and customers. Without evidence of the security mechanisms implementation and effectiveness, how would anyone know that the security mechanism existed or was needed?
As I evaluated each security mechanism, I examined what evidence was created by the operation of the security mechanism and how that evidence could be collected and maintained. I defined the auditability of a security mechanism as how its design creates the capacity for an independent party to review evidence demonstrating that the security mechanism existed, was operational, and how it had performed when used.
As I evaluated each security mechanism, I considered what existing regulation would impact on the operation of the security mechanisms and whether additional regulation would be required to support the introduction of the security mechanism.
The primary goal of the EU General Data Protection Regulation (GDPR) is to protect EU citizens' personal data and to govern the movement and processing of that data.
With regards to children, the regulation expects services to:
- implement a technical mechanism, based on an age-based model, to collect and record the consent of the holders of parental responsibility for a child to participate with the service;
- allow both the holders of parental responsibility and children to close any account they possess on the service, with any data held on the child erased and no longer processed by the service;
- produce clear and transparent policies that both holders of parental responsibility and children can understand with regards to how their data will be processed.
For instance, a security mechanism requiring proximity between users based on geolocation would require the social media platform or service to collect data regarding the location of children, which would mean that data protection and data security regulation would need to be considered.
When considering the effectiveness of security mechanisms, I first considered the risks that the security mechanisms are designed to mitigate. The three key risks are:
Content risk is out of scope as this research question does not directly relate to authenticating the content to which children are exposed.
For each proposed security mechanism, I decided to evaluate its effectiveness in mitigating contact risk and conduct risk. The evaluation would consider whether, in theory, introducing such a security mechanism would minimise the risk or whether the risk would still remain despite the security mechanism being in place.
I also decided to examine the implementation and operation of the security mechanisms through the use of a prototype application. As described in Section 3, I developed a prototype application, which implemented each proposed security mechanism. This facilitated me in stepping through the user journey and understanding how the security mechanisms could operate to reduce risks in practice.
For each proposed security mechanism I considered the feasibility, auditability, any regulatory considerations and the effectiveness. I then summarised these findings as an overall result.
It is feasible to design and deploy a complete end-to-end encryption system, though the implementation is challenging due to the complex mathematical nature of crypto-secure systems and the need to operate them securely on an ongoing basis. Managing, storing, operating and rotating the cryptographic keys used for SSL negotiation, application usage and specific account and device communication is a complicated process. It requires careful change management and consistent monitoring as new security vulnerabilities are discovered and disclosed. Social media platforms would need to hire specialist security researchers, developers and analysts to ensure their platform stays secure.
While a design might be considered secure at the outset of a project, small, unseen changes often occur as development and deployment happen. Should the level of complexity within the system increase, these changes can cause unforeseen security issues. Given the ongoing improvements in computing power, methods of breaking cryptography and the rising complexity of information security, this security mechanism's baseline is a continually moving benchmark.
For the prototype application, all communication with the web service is over SSL only. The database the web service utilises is encrypted at rest.
End-to-end encryption should be unseen and unnoticeable by both the holders of parental responsibility and the child using the service, and would be available on a mobile device.
Independent security audits could be conducted to ensure that communication is encrypted in transport and at rest. Security tests could include intercepting packets flowing in and out of the application and testing that they are encrypted. The completed security audits could be published alongside code audits. The security design could be documented and published, allowing independent security researchers the opportunity for review and feedback.
Many jurisdictions around the world, such as the United States of America and United Kingdom, limit the use, import and export of cryptography in the interests of national security. Consideration must also be given to the hosting of servers for the service in a jurisdiction that is sanctioned for cryptography export.
USA law-enforcement agencies have also asserted that services providing encrypted message and storage systems should be able to be intercepted and decrypted by the service and passed to them. For example, the Government of the USA asked a court to order Apple to create a unique version of iOS that would bypass security protections on the iPhone lock screen (Apple, 2016a).
End-to-end encryption does not fully mitigate contact risk for children; it does, however, limit an eavesdropper's ability to launch man-in-the-middle attacks that enable them to read and possibly alter participants' communications with each other. This security mechanism proved highly effective against my own attempts at staging a man-in-the-middle attack using the prototype application.
Should hackers obtain the PII data of children and holders of parental responsibility, this data would be readable by them unless encrypted at rest.
This security mechanism has no impact on conduct risk as it does not prevent children from trying to befriend unsuitable persons.
The security mechanism is feasible, but expensive to design and maintain. The security mechanism is straightforward to audit; however, there are some regulatory considerations for exporting encryption. This is a highly effective security mechanism, but it needs to be part of a system of security mechanisms to fully mitigate contact risk.
GPS-enabled mobile devices are able to ascertain their geolocation both through native code and through the HTML5 Geolocation API. The use of geolocation is well supported across devices and well documented for developers, making the security mechanism cheap, in terms of necessary resources, to implement.
Children using the service with a standard mobile device would be prompted to allow the game or application access to provide and report on their location to the service. This security mechanism has no impact on or involvement by holders of parental responsibility.
At each step of the user journey, the service records the latitude and longitude of the users who are becoming friends. Logs can be created to store these interactions and then audited to ensure that friendship requests can only be progressed when both parties are within a certain radius of one another. These logs would provide evidence of the security mechanism in action.
Guidance on location data is currently being issued by Data Commissioners, such as the Irish Data Commissioner, for both individuals (Commissioner, 2016a) and organisations (Commissioner, 2016b). Given that data on the geolocation of a user could lead to the identification or tracking of that user, this data is personal data and is subject to the EU General Data Protection Regulation (GDPR).
The potential to build profiles of children's movements by processing the location data is a concern, and there is insufficient regulation to limit the risks of processing this data.
The requirement of verified geolocation throughout the user journey reduces the number of friendships a child could create. This security mechanism limits friendship journey attempts to real world relationships, making it a strong security mechanism in terms of both conduct and contact risk.
This security mechanism is feasible, cheap to implement and requires no additional infrastructure. However, the security mechanism is ineffective as a standalone security mechanism and needs to be part of a system of security mechanisms.
It is feasible to design and develop a system that generates a pseudorandom code for display on a child's mobile device and has a second child input and verify that pseudorandom code against the same system. The use of cryptographically secure pseudorandom number generators is well understood and is deployed by social media platforms to support services such as forgotten-password systems.
Requiring a one-time use verification code, expiry period and limiting the rate at which users can search for the invite code would also be feasible.
In my application, both expiry and rate limiting are handled by the web service. The rate limiting uses the circuit breaker design pattern, applying a threshold of one web service call within a sixty-second period.
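The rate limit described above can be sketched as follows; state is held in memory here for illustration, whereas the prototype keeps the equivalent state in the web service, and `allowSearch` is an illustrative name:

```javascript
// Sketch of the rate limit: at most one invite-code search per account
// within a sixty-second window, per the threshold described above.
const WINDOW_MS = 60 * 1000;
const THRESHOLD = 1;

const recentCalls = new Map(); // userId -> timestamps of recent calls

function allowSearch(userId, now = Date.now()) {
  // Discard calls that have aged out of the sixty-second window.
  const calls = (recentCalls.get(userId) || []).filter((t) => now - t < WINDOW_MS);
  if (calls.length >= THRESHOLD) {
    return false; // circuit open: reject until the window has passed
  }
  calls.push(now);
  recentCalls.set(userId, calls);
  return true;
}
```

A second search attempt by the same account inside the window is rejected, while other accounts remain unaffected.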
A challenge for the security mechanism is ensuring children are prompted throughout the user journey, by text and animations, on how to participate in each step. The children participating would also need to possess sufficient numeric skills to recognise and input numeric characters on the device.
This security mechanism has no impact on or involvement by holders of parental responsibility.
The service could log verification code acceptance and rejections to produce evidence that the security mechanism requires the correct codes to be entered to progress towards friendship.
Auditors could use a Monte Carlo simulation to determine the probability distribution of the cryptographically secure pseudorandom number generator. This simulation could draw from existing records or produce new random numbers. Such an approach would provide evidence that the codes are random.
No regulatory considerations are required for this security mechanism.
Both children have to participate in the generation and input of the verification codes. Should an eavesdropper overhear the code from the first child, the second child would be unable to continue the friendship journey.
Expiry of the verification and rate limiting of searches mitigates long-lived invites that can be searched anonymously by bad actors.
Additionally, using a six-digit pin, a bad actor has a 1 in 1,000,000 chance of guessing an invite code in the first step of the user journey. As the bad actor would also need to guess the second pin code, the chance of anonymous, unsolicited contact becoming a friendship is 1 in 10^12 (0.0000000001%).
This is a robust security mechanism, as a child can only become friends with someone they know, reducing contact risk. However, whom a child wishes to be friends with and who is appropriate can differ.
The security mechanism is feasible and cheap to develop. It can be audited and has no regulatory barriers. While this is an effective security mechanism, it does not fully mitigate contact and conduct risks when operated in isolation.
Bidirectional friendships are feasible to implement as demonstrated by social media platforms such as Facebook and LinkedIn (Wu, 2012a) who already implement this friendship model on their platforms.
This security mechanism is dependent on a child understanding that they have to respond to a friendship request and that it will only be complete once both children have completed the friendship journey.
To implement this within the prototype web service I created a table in SQL that held both relationship data (user a and user b) and the auditable data of how the friendship journey progressed.
This security mechanism has no impact on or involvement by holders of parental responsibility.
The service could log when users became friends capturing the date and time of the first friendship request and then the subsequent date and time of the confirmation of that request. Auditors could test that interactions permitted once friendship is established have only happened after the friendship request has been confirmed and that once users are no longer friends that interactions are no longer feasible.
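The auditor's test described above can be sketched as a check over the friendship log: every interaction between two users must fall after the friendship was confirmed and before it ended. The record shapes and names here are illustrative, not the prototype's actual schema:

```javascript
// Sketch of an audit check over logged friendship and interaction records.
// Timestamps are plain numbers for illustration (e.g. Unix milliseconds).
function interactionsWithinFriendship(friendship, interactions) {
  return interactions.every((i) => {
    const afterConfirm = i.at >= friendship.confirmedAt;
    // An ongoing friendship has no end time; otherwise interactions must
    // predate the ending of the friendship.
    const beforeEnd = friendship.endedAt == null || i.at <= friendship.endedAt;
    return afterConfirm && beforeEnd;
  });
}
```

Run over the full log, any record failing this predicate is evidence that the bidirectional-friendship control did not operate as designed.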
No regulatory considerations are required for this security mechanism.
By requiring bidirectional friendships, rather than a unidirectional friendship, the security mechanism limits contact risk by a bad actor, as the child has to consent to the friendship request. This removes the risk of a bad actor following a user without their knowledge or consent.
The security mechanism is feasible and cheap to implement. It can be audited and has no regulatory barriers. This is robust as part of a group of security mechanisms but ineffective alone.
It is feasible for a social media platform to develop, as part of their privacy model, the ability for a user to defer the acceptance of a social connection to another, authoritative user, such as a holder of parental responsibility.
Holders of parental responsibility would need to be informed of the friendship request via email, application notification or text message and prompted to take action. Holders would be expected to approve or reject the friendship request.
Children would need to be informed that their friendship requires confirmation from both children's holders of parental responsibility.
However, the challenge for this security mechanism is the ability to verify that a person is authorised to be a holder of parental responsibility for a child. While outside of the scope of this research, the ability to accurately and reliably validate and verify holder of parental responsibility and child accounts is required for this security mechanism to be feasible.
The social media platform could log the time and date that each holder of parental responsibility was informed of the friendship request, in addition to the acceptance or rejection of the request.
Auditors could test that the children only became friends once both children's holders of parental responsibility had approved the friendship.
The EU General Data Protection Regulation (GDPR) does expect services to implement a technical mechanism to collect and record the consent of the holders of parental responsibility for a child to participate with a service. This regulation does not, however, require the level of granularity that this security mechanism proposes.
By requiring holders of parental responsibility to review and approve friendships, this security mechanism mitigates both contact and conduct risk.
The security mechanism is feasible and cheap to implement. It can be audited, and it builds further upon the GDPR regulatory requirement of a technical mechanism to collect and record the consent of the holders of parental responsibility for a child participating within the service, for a specific feature.
A challenge to this security mechanism, as highlighted by the literature review, is that the majority of parents feel they do not know enough to help their child manage online risks.
This security mechanism is robust as part of a group of security mechanisms but not fully effective alone as approved friendships could be compromised if end-to-end encryption is not in place.
As I evaluated each security mechanism individually, my results showed that each was ineffective alone and could mitigate the risks effectively only when operating as part of a group.
Taking a multivariate approach, I grouped the security mechanisms together with the objective of considering combinations that would mitigate the risks. I began by switching off one security mechanism at a time, as shown in the table below.
| Configuration | E2EE | Geolocation verification | Verification codes | Bidirectional friendship | Friendship approval by holders of parental responsibility |
|---|---|---|---|---|---|
| Without end-to-end encryption | ✘ | ✔ | ✔ | ✔ | ✔ |
| Without geolocation verification | ✔ | ✘ | ✔ | ✔ | ✔ |
| Without verification codes | ✔ | ✔ | ✘ | ✔ | ✔ |
| Without bidirectional friendship | ✔ | ✔ | ✔ | ✘ | ✔ |
| Without friendship approval by holders of parental responsibility | ✔ | ✔ | ✔ | ✔ | ✘ |
Without the security mechanism of end-to-end encryption, children are vulnerable to eavesdroppers launching man-in-the-middle attacks. Hackers would also be able to obtain the PII of children and holders of parental responsibility. Without this security mechanism, the group of security mechanisms is ineffective.
Without the geolocation verification security mechanism, each step an actor takes in the friendship journey cannot be verified independently by the service to ensure that both are physically located close to each other. Should the actors not be physically close to each other, passing the codes and PINs requires some form of existing communication channel between the actors creating the relationship. Therefore both actors would need to be able to communicate with each other within the time frame to create a friendship.
Even without this security mechanism being in place, the holders of parental responsibility are still required to approve the friendship request.
Without this security mechanism, the group of security mechanisms remains effective.
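For illustration, a server-side proximity check of the kind described above could compare the two devices' reported coordinates using the haversine formula. This is a sketch only; the 100 m threshold is an assumption, not a value from the study:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Great-circle distance in metres between two latitude/longitude points.
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def in_proximity(loc_a: tuple, loc_b: tuple, threshold_m: float = 100) -> bool:
    # Service-side check that both devices report nearby locations
    # before the friendship journey is allowed to proceed.
    return haversine_m(*loc_a, *loc_b) <= threshold_m
```

Note that, as discussed below for the verification code mechanism, reported coordinates can be spoofed, which is why this check complements rather than replaces the other mechanisms.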
Without the security mechanism of device display verification codes, children would still be required to be in close proximity to each other to become friends, due to the geolocation verification security mechanism. However, geolocation can be spoofed, allowing a bad actor to give the service false information that they are in a specific location.
Even without this security mechanism being in place, holders of parental responsibility are still required to approve the friendship request.
Without this security mechanism, the group of security mechanisms remains effective.
Without the bi-directional friendship security mechanism, a unidirectional model would be deployed. This approach would modify the proposed user journey and require a child or a holder of parental responsibility to consent to the friendship request before any interaction can take place. Even without this security mechanism being in place, holders of parental responsibility are still required to approve the friendship request.
Without this security mechanism, the group of security mechanisms remains effective, with a modification to the user journey.
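The difference between the two models reduces to a single check over a set of directed consent edges. This is a hypothetical representation for illustration, not the prototype's actual data model:

```python
def is_friendship_active(consents: set, a: str, b: str) -> bool:
    # Bidirectional model: the friendship exists only when both
    # directed consent edges are present. A unidirectional model would
    # require only one edge (plus parental approval) before any
    # interaction could take place.
    return (a, b) in consents and (b, a) in consents
```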
Without the approval of friendships by holders of parental responsibility, a child could be coerced or incentivised by a bad actor into creating a friendship within the system.
Without this security mechanism, the group of security mechanisms is ineffective.
Two security mechanisms, end-to-end encryption and the approval of new friendships by holders of parental responsibility, are essential security mechanisms to authenticate and verify social connections for mobile games and applications aimed at children under 13.
While the other security mechanisms are not essential, they do offer additional mitigation of the risks, particularly by requiring physical and technical verification and validation that both actors are in proximity to one another, through the geolocation verification and device display verification code security mechanisms.
To date, no research has brought together risk management techniques with children's digital media services. Prior research has not touched upon secure mechanisms that allow users under 13 to authenticate and verify social connections. Practitioners attempting to implement similar security mechanisms do not have an established approach or off-the-shelf risk management framework to assist in the consideration of security and privacy for users of their services and systems.
Therefore, this research provides an approach to risk management and a suite of security mechanisms to begin filling these knowledge gaps.
At the outset of the project I understood that the research I was undertaking was complex, would require a multidisciplinary approach and had limited sources of existing academic research related to the topic. I felt that by undertaking this research and exploring how a child could have a safe and secure social networking account, industry practitioners could build upon this research and put it into practice.
I met this research aim by examining the security mechanisms in place across mobile games and applications aimed at children under 13 to authenticate and verify social connections. I used my literature review to inform my approach to identifying risks, developing security measures and selecting criteria to evaluate security measures against.
When I realised that existing security mechanisms were deficient in mitigating risks, I considered the underlying cause. I identified that IT risk frameworks were not prompting organisations to explicitly consider risks to users. I then selected a suitable risk and control framework, COSO, and extended it to include risks to users. I used the framework to identify and assess risk and then designed potential security mechanisms to authenticate and verify social connections for mobile games and applications aimed at children under 13. I evaluated these proposed mechanisms using a criteria-based guideline and implemented the proposed security mechanisms in a prototype application.
I met this objective by producing an evaluation guideline for security mechanisms focusing on criteria for design adequacy (encompassing feasibility, auditability, and regulatory considerations) and operational effectiveness. By evaluating security mechanisms in terms of both design and effectiveness, I was able to consider whether the security mechanisms could be implemented in practice and whether the security mechanisms would be effective if implemented. I based my security mechanism evaluation approach on IIA Standards for control evaluation.
I met this objective by evaluating the proposed security mechanisms using the criteria-based guideline. Using a consistent set of criteria to evaluate the proposed security mechanisms enabled me to compare the security mechanisms in terms of design practicality and effectiveness and allowed me to consider the security mechanisms as singular security mechanisms but also as groups of security mechanisms. Ultimately this approach allowed me to conclude that it is possible to implement security mechanisms to authenticate and validate social connections and which security mechanisms are essential to this validation.
I fully met this objective by developing an iOS application and associated social graph platform as a means of evaluating the proposed security mechanisms. I used the prototype application to assess the feasibility of implementing the technical security mechanisms and gain insight into potential challenges. I found, however, that as my research progressed the academic benefit of creating the prototype application diminished.
The challenge for this research was not how to implement security mechanisms but how to identify risks and develop suitable security mechanisms in the context of this specific research question. The proposed mechanisms are simple and make use of security mechanisms or approaches that already exist in other areas of IT risk management. The five proposed security mechanisms are not excessively technical in nature or overtly difficult to implement. Encryption is an existing feature of many online interactions; parental approval already exists as a security mechanism for creating social media accounts; devices have functionality to transmit geolocation information; bi-directional friendships are a feature of many social media platforms; and entering codes to ensure that only authorised users can progress through a process is also an established mechanism.
I expected my research to centre on implementing existing security mechanisms, suggested by my literature review, and developing approaches to resolve any related technical issues. While my research did involve the implementation of proposed security mechanisms in a prototype application, the focus of the research was on how to identify risks and evaluate security mechanisms. The key strength of the findings is not the technical implementation of the proposed security mechanisms, but rather the process by which they were derived and the overall risk mitigation these security mechanisms can provide.
While the aim and objectives of the research have been fully met, there are a number of limitations to the approach and simplifications which must be considered in conjunction with the findings.
A limitation of this study is that the proposed security mechanisms were not tested using a sample of testers similar to the intended users. This would have entailed organising children to use the application to become friends and an organisation implementing the technical security mechanisms. This type of testing would have enabled the assessment of user experience of the proposed security mechanisms to ensure children and parents can successfully navigate the user journey. Such testing would have also provided empirical evidence on the effectiveness of the security mechanisms.
A number of simplifications were used when designing the proposed security mechanisms. For instance, the security mechanisms were designed without consideration of how the security mechanisms could adapt to changes in family circumstances. Situations where it is necessary to remove someone as a holder of parental responsibility for a child would complicate the design of the security mechanism. Additionally, this study focused on preventative security mechanisms rather than including detective security mechanisms that would identify instances where security mechanisms failed and an inappropriate friendship was established.
Another simplification in security mechanism design was the assumption of homogeneous risk across users under the age of 13. No difference in security mechanism design was considered for users aged 4 compared with users aged 12. This type of granular model could be considered in future research.
One challenge faced in this study was the lack of data on social media connection outcomes. Such data would have enabled me to perform risk assessments based on statistical methods. I assigned impact and likelihood risk ratings based on my subjective expectations, due to the lack of data. If data had been available from a social media platform regarding connection outcomes, it would have been possible to create a frequency distribution for bad outcomes and a severity distribution for bad outcome impacts. This could then have been used to assess risk based on empirical evidence rather than a subjective assessment.
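Had such data been available, the frequency/severity approach could be sketched as a simple Monte Carlo aggregate-loss simulation. The Poisson rate and the severity function below are placeholder assumptions; real parameters would be fitted to platform outcome data:

```python
import math
import random

def simulate_expected_loss(freq_lambda: float, severity, n_years: int = 10_000,
                           seed: int = 42) -> float:
    # Draw a Poisson number of bad outcomes per simulated year, then a
    # severity for each outcome, and average the annual totals.
    rng = random.Random(seed)
    threshold = math.exp(-freq_lambda)
    total = 0.0
    for _ in range(n_years):
        # Knuth's method for Poisson sampling (adequate for small lambda).
        n_events, p = 0, rng.random()
        while p > threshold:
            n_events += 1
            p *= rng.random()
        total += sum(severity(rng) for _ in range(n_events))
    return total / n_years
```

With fitted frequency and severity distributions, the same loop yields a full distribution of annual outcomes rather than a single subjective impact-and-likelihood rating.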
A related topic, which was not addressed in this research, is how a service could verify a person's identity and age. Without this mechanism, the ability to verify that a person is authorised to be a holder of parental responsibility for a child is more difficult to resolve. Without correctly identifying under-13s and their holders of parental responsibility, social media platforms are limited in the protection they can provide to children.
The literature review concluded that exclusion is the most prevalent security mechanism for protecting under-13s on social media. The terms and conditions of social media organisations such as Facebook, Instagram and Twitter do not allow under-13s to participate in their services (Facebook, 2015; Instagram, 2016; Twitter, 2016). However, this security mechanism is inadequate, as evidence suggests that more than three-quarters of children aged 10 to 12 in the UK have social media accounts (Coughlan, 2016). Rather than developing appropriate security mechanisms, social media platforms are excluding under-13s from services while turning a blind eye to this demographic's actual usage. This is a grave failure on the part of social media platforms to protect their under-13 users.
Why are social media platforms not doing more to introduce appropriate security mechanisms to protect under-13s when creating social connections? Existing risk management frameworks are inadequate, in terms of their breadth of scope, when identifying and managing risks. Risk frameworks, such as COBIT and COSO, focus solely on risks that affect the organisation, not risks faced by users of the systems or services. In failing to enable organisations to explicitly consider the risks facing their users, risk frameworks are limiting organisations' ability to implement appropriate security mechanisms. A key tenet of both COSO and COBIT is that unless risks are properly identified and assessed, suitable security mechanisms cannot be designed.
By expanding the scope of the COSO framework to include risks facing users, this study was able to identify and assess risks for under-13s when creating social connections. Through assessing these risks, a number of security mechanisms were identified and evaluated, leading to the conclusion that it is possible to effectively authenticate and verify social connections within mobile games and applications for under-13s. The security mechanisms are effective in minimising the identified risks children face and are feasible for practitioners to implement when developing a friendship service within their game or application.
While five security mechanisms were identified and assessed, the study concluded that none of these security mechanisms fully addressed the risks by themselves. However, two of the security mechanisms, end-to-end encryption and friendship approval by holders of parental responsibility, when implemented together effectively mitigated the risks.
The holders of parental responsibility security mechanism requires parents to be knowledgeable and confident in making decisions regarding the approval of friendships. Based on the literature review, the majority of parents feel they do not know enough to help their child manage online risk; this strongly indicates that more support, education and tools are required for parents to adequately oversee children's online activities.
The other three security mechanisms assessed, geolocation verification, device display verification codes and bidirectional friendship security mechanisms, are complementary security mechanisms aimed at limiting the number of entities a child can befriend. Until parents are more comfortable managing online activities, it is advisable for all five security mechanisms to be implemented.
Who is responsible for ensuring friendships are appropriate? Is it parents, the organisations that offer social media services, or children themselves? The five recommended security mechanisms span all three sets of stakeholders and need to work in tandem as a shared responsibility model to mitigate the risks. This shared model relies on the service providing an adequate and secure approach to friending, parents providing oversight by approving friendships, and children being taught whom it is appropriate for them to befriend. While it is not possible to eliminate all risk, such a model should provide a means to mitigate the risks as far as possible.
The organisations who would be responsible for implementing these security mechanisms are not the people the security mechanisms are designed to protect. Introducing additional security mechanisms is likely to incur additional costs for organisations and may affect the user experience. Careful consideration will be needed to address organisations' reluctance to implement additional security mechanisms. Regulatory devices such as inspections, sanctions and fines, coupled with reputational risk, may be the levers that encourage organisations to introduce such security and privacy mechanisms.
In summary, appropriate security mechanisms are not currently in place to authenticate and verify secure social connections for mobile games and applications aimed at children aged under 13. It is possible to develop and implement security mechanisms that would authenticate and validate such social connections. However, support is required for organisations, parents and children through the development of appropriate risk frameworks, parental education and the introduction of additional security mechanisms.
A clear recommendation from this study is that mobile game and application developers offering friendship features within their games and applications should enhance their security mechanisms to adequately protect under-13s. Such organisations should consider the user journey, identify the inherent risks to users and implement security mechanisms, such as the ones evaluated by this study, to improve the security and privacy measures for all their users. Excluding under-13 users in terms and conditions while allowing them to use the services in practice without providing sufficient security mechanisms is unacceptable.
A further recommendation is that IT risk frameworks be further expanded to include risks to stakeholders. This would allow organisations to identify and consider risks along the full user journey, rather than just the risks posed to their own organisations. This expanded framework would support the introduction of more comprehensive security and privacy mechanisms when authenticating social connections for children.
Another recommendation is for research to be undertaken regarding the appropriate collection, processing and retention of geolocation data, particularly that of vulnerable groups such as children. Researchers should aim to provide a framework to ensure robust security and privacy practices while adhering to the EU General Data Protection Regulation (GDPR).
As per the results in Section 4, the friendship approval by holders of parental responsibility security mechanism is a key security mechanism and requires holders of parental responsibility to be knowledgeable and confident in making decisions. Without this parental oversight online risks are not mitigated. Further research is warranted on how best to improve parental knowledge of online risks and security mechanisms.
A further study could sample a population of social media platform users under the age of 13 to test the security mechanisms implemented in the prototype application. The study could examine:
Finally, further studies or industry practitioners should examine how data regarding the user journey can be collected including instances of bad outcomes (i.e. where inappropriate friendships are established) and impacts of bad outcomes. This data would allow organisations to confirm security mechanisms are effective, and potentially identify additional risks and security mechanisms. In addition, this data could be used to assess risk based on statistical methods. If data were available from a social media platform regarding connection outcomes, it would be possible to create a frequency distribution for bad outcomes and a severity distribution for bad outcome impacts. These distributions could be used to quantify the risk based on empirical evidence.