Ethics
A short recap of the scope and ethics of the research.
Q1a. What is the aim of your research?
Research area: A simple yet effective AI risk assessment technique, building assurance when using AI systems within the UK hospitality sector in the Intelligence Age (Altman, 2023).
Main research question: Considering the risk of social polarisation due to adversarial AI risks and the opportunities for novel AIntrepreneurs™, and in alignment with the essential requirements of ISO 42001 for AI system development and use, how can AI risk assessment techniques be developed to help organisations perform their role responsibly, simply and effectively?
Sub Questions:
1. What are the artificial intelligence risks and opportunities for AIntrepreneurs™?
2. What parts of ISO 42001 are essential for the development and use of AI systems to manage an AIntrepreneurs™ ecosystem safely and resiliently?
3. What technique can organizations utilize to self-assess AIntrepreneur™ risks in line with ISO 42001?
BASIC DETAILS
Student name: Peter Joyce
Student number: UP2093049
Email address: Peter.Joyce@myport.ac.uk
Supervisor name and email: Phil Crook, Phil.Crook@port.ac.uk
Dissertation/Project Title:
UK AI Entrepreneurs & Responsible AI Use : Believe, Understand & Make It Happen
In Search Of UK Entrepreneur AI Responsible Use Why, How and What, Aligned To Essential Requirements Of ISO/IEC 42001:2023.
AIM
UK entrepreneurs are busy people, primarily focused on using their own capital in search of profit. AI comes with many risks, some of which may not yet be known. It also comes with potentially huge benefits and opportunities, so the use of AI may augment current work and bring great rewards. But WHY should entrepreneurs use AI responsibly in line with recently published guidance?
The world's first AI management system standard, published in 2023, offers guidance on HOW AI should be developed and used responsibly. However, the ISO standard guidance on risk assessment techniques was published in 2019. There is therefore a gap concerning WHAT UK AI entrepreneurs need to do to self-assess against the responsible-use controls proposed in ISO/IEC 42001, and HOW this should be done in a standard and effective way. Because these controls relate to both development and use, not all of them need to be considered by busy UK AI entrepreneurs. They simply want to use AI systems for competitive advantage, so they need to know which controls are important and which would waste their time.
The aim of this research is to fill this gap: to help emerging UK AI entrepreneurs BELIEVE in the power of AI whilst controlling the relevant risks, UNDERSTAND which essential controls enable responsible use of AI, and then discover the requirements of time-poor entrepreneurs for a simple yet effective AI vulnerability self-assessment tool to MAKE IT HAPPEN: responsible AI use.
OBJECTIVES
The objectives are as follows:
BELIEVE
- OBJ1. Examine the differing perspectives on AI entrepreneurial risks, threats and opportunities relating to supporting entrepreneurs in adopting augmented use of AI tools.
- OBJ2. Determine the attraction of AI adoption, and the considerations and motivations of UK AI entrepreneurs.
UNDERSTAND
- OBJ3. Determine the essential requirements for the responsible and effective use of AI systems by AI entrepreneurs, in line with the essential ISO 42001 controls.
MAKE IT HAPPEN
- OBJ4. Determine UK AI entrepreneurs' requirements for a novel technique for actionable AI vulnerability self-assessment, in line with ISO 42001, delivered in a simple, effective, fun and engaging way.
How will the primary data contribute to the objectives of the dissertation / research project?
Artificial intelligence (AI) has been a relatively novel industry in the eyes of wider society since the launch of OpenAI's ChatGPT in 2022, which grew to over xxx million subscribers in xxx days.
Industry experts, however, know that the catalyst for the industry goes back to its formal inception at the xxx conference in the 1950s.
As such, there are many varied views on the risks and opportunities associated with AI (objective 1), only a handful of standards experts have been involved in creating the ISO 42001 AI management system guidance (objective 2), and the risk assessment technique guidance in ISO 31010 has yet to be updated with a novel technique for assessing against the recommendations of ISO 42001 (objective 3).
The ISO guidance follows a rigorous and arduous standards development process involving industry experts from all over the world in a top-down approach. An important question stands, however, as to how effective (doing the right thing) this guidance is, having been built from the top down.
The primary data must therefore attempt to seek out a balanced perspective from a cross section of society, outside the groupthink risk of an ISO global conference development room.
AI ecosystems are all around us, and the 2025 emergence of agentic AI now sees AI agents empowered to act on behalf of stakeholders across all areas of the internet, opening up not only significant threat risk but also huge performance opportunity.
The cross section of societal populations will be:
Government Agencies
ISO Risk and Resilience Standards
Objective 1. To understand the risks of societal polarisation from job displacement through the impacts of adversarial AI, and the opportunities for AIntrepreneurs™.
Primary data: Highlight the concerns and proposed requirements of entrepreneurs in society who wish to simply and effectively assess the risks of developing and using AI systems as they attempt to augment their current entrepreneurial capabilities towards those required to become AIntrepreneurs™.
Objective 2. To identify the essential requirements for responsible use or development of AIntrepreneurs™ in line with ISO 42001.
Primary data: Seek consensus on what the “essential requirements for responsible use or development” of an AIntrepreneurs™ ecosystem should be, to help discover the 20% of ISO 42001 guidance that all those engaged agree offers 80% of the value within the management system (an illustrative sketch of this prioritisation follows the objectives below).
Objective 3. To discover a novel technique for AI risk assessment that addresses the essential requirements identified for responsible use or development of AIntrepreneurs™, in line with ISO 42001, in a simple, effective and engaging way.
Primary data: Discover opinions from the bottom up on what principles and framework should be developed to meet the requirement for a simple yet not simplistic AI assessment technique, generically designed for the needs of all.
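To illustrate how the consensus data gathered for Objective 2 might support the 20/80 prioritisation described above, a minimal sketch is given here. It assumes each engaged participant rates how essential each ISO 42001 control is; the control labels, ratings and the keep_fraction parameter are hypothetical placeholders for this proposal, not findings or a finished method.

from __future__ import annotations

# Hypothetical sketch only: rank ISO 42001 controls by participant consensus and
# keep the "vital few" (roughly 20%) expected to deliver most of the value.
# Control labels and ratings below are invented placeholders, not survey results.
consensus_scores = {
    "A.2 AI policy":                          [5, 5, 4, 5],
    "A.5 AI impact assessment":               [4, 5, 5, 4],
    "A.6 AI system life cycle":               [3, 4, 3, 3],
    "A.7 Data for AI systems":                [4, 4, 5, 5],
    "A.8 Information for interested parties": [2, 3, 2, 3],
}

def essential_controls(scores: dict[str, list[int]], keep_fraction: float = 0.2) -> list[str]:
    """Return the top keep_fraction of controls, ranked by mean rating (highest first)."""
    ranked = sorted(scores, key=lambda c: sum(scores[c]) / len(scores[c]), reverse=True)
    keep = max(1, round(len(ranked) * keep_fraction))
    return ranked[:keep]

if __name__ == "__main__":
    print(essential_controls(consensus_scores))  # e.g. ['A.2 AI policy']

In practice the ranking rule (mean rating, full consensus, or another agreement measure) would itself be agreed with participants before any such analysis is run.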
Q1b. What are the objectives of your project?
The objectives are:
1. To understand the risks of AI job displacement and the opportunities for AIntrepreneurs™.
2. To identify the essential requirements for responsible use or development of AIntrepreneurs™ in line with ISO 42001.
3. To discover a novel technique for AI risk assessment addressing the essential requirements identified for responsible use or development of AIntrepreneurs™, in line with ISO 42001, in a simple, effective and engaging way.
Q2. Have you read the Research Ethics Guidance on the module Moodle site for this unit?
YES.
Q3. What data sources do you intend to use in your project?
WEF report on global risks
Industry influencers highlighting risks
such as Mustafa Suleyman, Geoffrey Hinton, Elon Musk, Sam Altman, Demis Hassabis, Dario Amodei, Yoshua Bengio and Nick Bostrom warning of AI threats.
Influencers highlighting opportunities
such as Fei-Fei Li, Andrew Ng, Yann LeCun, Demis Hassabis and Cassie Kozyrkov.
Reports referring to risks
- The WEF insights (The cyberthreat to watch in 2025 and other cybersecurity news)
- Kela (2025 AI threat report); How cybercriminals are weaponizing technology
- Gov.uk (International AI safety report)
- Digital AI (2025 application security threat report)
- OpenAI (Disrupting malicious uses of our models); Update 2025
Reports referring to opportunities
- S&P Global (AI and society); Implications for global equality and quality of life
- PwC (2025 AI Business Predictions)
- techUK (guest blog, Sof Prodigy); AI and society: a case study on positive social change
- McKinsey Research 2025 (Superagency in the Workplace); Empowering people to unlock AI's full potential
- OECD (Artificial intelligence in society)
Standards
ISO 42001 AI management system; ISO 23894 AI risk management; ISO 31000 Risk management; ISO 31010 Assessment techniques; ISO 27001 Information security
Historic government security models
VSAT (vulnerability self-assessment tool)
Q4. Will your research involve collecting information or objects (directly /indirectly related to) from living human participants?
YES.
Q5. Do you intend to collect personal or confidential data about living individuals?
NO.
Q6. Do you intend to collect data that entails a security risk?
NO.
Q7. Does the research entail any conflict of interest?
NO.
Ethics questions
Dissertation Focus
How can AI risk assessment techniques support responsible use amid adversarial threats and AIntrepreneur™ opportunities, aligned with ISO 42001?
2. Does the research involve any of the following organisations?
NO.
3. Do you intend to collect primary data from human subjects or data that are identifiable with individuals?
YES.
4. How will the primary data contribute to the objectives of the dissertation / research project?
1. To discover the risks and opportunities of AI for AIntrepreneurs™.
2. To agree on the essential guidance from ISO 42001.
3. To discover a simple yet effective framework for assessing the risks of AI management systems, aligned with ISO 42001, for AIntrepreneurs™.
Primary data will contribute to the research by providing real-world societal perspectives. With so many varying opinions, perspectives and biases on display, it is essential to seek the wide and diverse views of as many as possible of the people who will ultimately have the opportunity to become an AIntrepreneur™.
5. What is/are the population(s) you are researching?
POPULATION 1
Risk, Crisis & Resilience Standards Experts
Representatives of related international standards
POPULATION 2
Entrepreneurial organisations
1. Large organisation: Arthouse Hotel CEO, management and guests
2. Medium organisation: CYS Security CEO and management
3. Small organisation: singer/songwriter sole traders
POPULATION 3
Risk Informed Citizens
A LinkedIn group with over 3,000 members will be invited to respond to the questionnaire. The first 100 returns will be used in the study.
6a. How big is the sample for each of the research populations?
POPULATION 1: 5 to 10 experts.
POPULATION 2: The entrepreneur in each company will act as gatekeeper and invite staff members to be interviewed (the leader plus up to three employees in the large and medium companies).
POPULATION 3: Up to 100 respondents will be sought from risk, crisis and resilience related LinkedIn groups.
6b. How was this sample arrived at?
Selection criteria
7. How will respondents be identified?
Known ISO experts; randomly selected would-be entrepreneurs, to avoid bias.
8. How will respondents be recruited?
Linked In & email and gatekeepers at hotel and creators hq
9. What steps are proposed to ensure that the requirements of informed consent will be met for those taking part in the research?
Reference the informed consent process.
10. How will data be collected from each of the sample groups?
Surveys & questionnaire
11. How will data be stored?
University cloud storage (to be defined).
12. What measures will be taken to prevent unauthorised persons gaining access to the data, and especially to data that may be attributed to identifiable individuals?
The security policy of the university will be identified and applied.
13. What steps are proposed to safeguard the anonymity of the respondents?
Reference the suggestions in Saunders for safeguarding anonymity.
14. Are there any risks (physical or other, including reputational) to respondents that may result from taking part in this research?
NO.
15. Will any data be obtained from a company or other organisation?
NO.
16a. What steps are proposed to ensure, informed consent will be gained for any organisation in which data will be gathered?
Reference the informed consent process.
16b. How will confidentiality be assured for the organisation?
Reference the confidentiality process.
17. Will the proposed research involve any of the following?
Potentially vulnerable groups (e.g. adults unable to consent, children)? YES / NO
Particularly sensitive topics? YES / NO
Access to respondents via ‘gatekeepers’? YES / NO
Use of deception? YES / NO
Access to confidential personal data (names, addresses, etc.)? YES / NO
Psychological stress, anxiety, etc.? YES / NO
Intrusive interventions? YES / NO
Gatekeepers will not share any personal or personnel details; everyone will be anonymous and protected.
18. Are there any other ethical issues that may arise from the proposed research?
NO.