7BSP1266 Cyber Security Issues Using AI in Smartphones Assignment Sample
The Covid-19 pandemic has significantly disrupted our personal and professional lives; we are not the same individuals we used to be. According to experts, since the virus first appeared in Wuhan, China, in late 2019, it has spread around the world at an alarming pace (Mosteanu, 2020).
International organisations and scientists increasingly rely on new technologies such as artificial intelligence (AI) to monitor the epidemic, anticipate where the virus will spread next, and plan a realistic response strategy (Mosteanu, 2020).
The main aim of this research is to analyse and explore the security and risk issues arising from AI in smartphones. The objectives are:
- To analyse the risk issues arising from AI in smartphones
- To analyse the security threats posed by AI in smartphones
- To understand how AI creates security threats for data protection in smartphones
The research questions are:
- What are the risk issues arising from AI in smartphones?
- What are the security threats posed by AI in smartphones?
- How is AI creating security threats for data protection in smartphones?
Beyond using artificial intelligence to examine and identify medications or therapies that may be beneficial in treating Covid-19, several research and development institutions are generating prototype vaccines as a starting point for further work. Artificial intelligence has also been used to recognise visual signals of Covid-19 on lung scan images. Researchers have found that they may be able to follow the progression of the disease by using wearable sensors to detect changes in body temperature, together with open-source data platforms that track its spread.
Using their in-house AlphaFold artificial intelligence engine, DeepMind was the first company to predict and publish protein structures associated with the coronavirus during the early stages of the pandemic. The vaccines from Pfizer, Moderna, and AstraZeneca were approved at a time when artificial intelligence and other cutting-edge technologies were being used to handle the huge approval process.
For example, the Medicines and Healthcare products Regulatory Agency (MHRA) of the United Kingdom is collaborating with the UK arm of Genpact, an international digital transformation professional services organisation, to use artificial intelligence to monitor vaccines’ potential adverse effects on different demographic groups, such as children and the elderly.
Beyond medical applications, artificial intelligence has found uses in a range of other sectors. On social media, for instance, AI helps in the battle against false information and misinformation by flagging searches for dramatic or alarming terms on platforms such as Facebook and by surfacing reputable web references.
Smartphone users accessing their social media accounts are monitored by software that uses artificial intelligence to identify them and prevent them from publishing prohibited content. These are all examples of artificial intelligence capabilities that governments around the world are using to help enforce lockdown protocols in their own countries.
The fight against Covid-19, however, has exposed some fundamental limitations of artificial intelligence (AI) technology. Data-driven technologies learn from experience: training systems with high-quality inputs that precisely characterise the expected behaviour of a system is crucial to achieving the required performance levels. The problem is that, although this method has proven successful for AI systems operating in pre-defined conditions and settings, it has been far less reliable when applied to open-ended real-world scenarios.
The limits of artificial intelligence are illustrated by a study of the financial sector. March 2020 was one of the most difficult months on record for the stock market; given the market value wiped out by the epidemic, this is not surprising. While the majority of hedge funds were using artificial intelligence to decide the structure of their portfolios, the market shock affected both dollar-neutral quant trading strategies (those holding equal long and short positions) and traditional quantitative trading approaches (Mendhurwar, 2021).
The most substantial damage may have been caused by the adoption of artificial intelligence models that were far too complex for quant funds to comprehend. One of the primary reasons for AI’s poor track record here is the technology’s inability to cope with unprecedented events such as Covid-19, which have occurred only a handful of times in market history (Mozzaquatro, 2018). Standard operating procedures (SOPs) are used to establish governance norms.
They also serve as criteria for responsible behaviour and enable organisations to demonstrate compliance with industry best practices and regulatory requirements. Management standards are also used to assess the overall performance of an organisation. It is not yet known how many new standards will be developed in any category to serve as cybersecurity criteria for artificial intelligence systems, or whether and how active projects will relate to current cybersecurity standards.
The development of artificial intelligence management standards, which establish requirements for organisational governance and system robustness, may prove particularly important in light of the threat of cyberattack escalation, the possibility of loss of control, the difficulties of anomaly detection and monitoring, and the large attack surface of the digital environment. Artificial intelligence technology and its associated risks will advance in parallel with the development of attack techniques, as will our understanding of both.
Artificial intelligence is a double-edged sword in the fight against Covid-19: because it can learn from its mistakes, it may prove useful in the long run, even in a health crisis that has never been encountered before. Yet although these systems can be relied upon, they have inherent limitations that must be addressed before they can be properly deployed (Culot, 2019).
The relationship between artificial intelligence (AI) and cybersecurity is best seen as a two-way interplay: AI has the ability both to empower and to undermine cybersecurity. In the case of the pandemic, AI struggled to make sense of the situation because of a scarcity of high-quality data.
Artificial intelligence raises a wide range of cybersecurity problems that are directly tied to the way it operates and learns. These dangers are compounded by the complexity of the AI systems from which the threats originate. This article finds that artificial intelligence has the potential to significantly enhance cybersecurity procedures, but that it can also worsen existing security problems. The results of the study, including recommendations on how to limit the associated risks, are presented below (Bawack, 2021).
Although the literature on artificial intelligence (AI) is up to date, there does not seem to be general agreement on what artificial intelligence is or should be. This article therefore adopts the following definition.
As defined by the Organisation for Economic Co-operation and Development (OECD), artificial intelligence (AI) is “a machine-based system that can make predictions, suggestions, or judgments impacting real or virtual surroundings for a specific set of human-specified goals.”
A legal framework for this concept was developed as part of the European Commission’s proposed “Regulation on a European Approach to Artificial Intelligence”, published in 2021.
The purpose of this research is to explore artificial intelligence in all its manifestations, both symbolic and non-symbolic, in order to better understand it. Symbolic artificial intelligence uses programming languages to define unambiguous rules that are hard-coded into the system. Non-symbolic artificial intelligence, exemplified by machine learning, is distinguished from its counterpart by the absence of explicitly defined rules.
Findings and results
To find patterns or generate predictions from huge volumes of data, automated systems must be capable of dealing with ambiguity and incompleteness, and this capability is a very important benefit.
Throughout this report, a heavy emphasis is placed on machine learning, the fundamental method that powers today’s artificial intelligence systems (Chung, 2021).
In computing, machine learning refers to a collection of techniques that allow computers to learn automatically from patterns and inferences rather than through explicit instructions from a human. A large number of examples of correct outcomes are used to train computers to make an informed choice (Kuzlu, 2021). Training the machine by trial and error is also possible, provided an initial set of rules is developed before training begins.
The three basic types of machine learning algorithms are supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is the most common and most extensively used of the three.
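As a minimal illustration of supervised learning, the sketch below trains a 1-nearest-neighbour classifier on a handful of labelled examples and uses it to classify new inputs. The feature vectors and labels are invented purely for illustration; they are not drawn from any real security dataset.

```python
# Minimal supervised learning sketch: a 1-nearest-neighbour classifier.
# All feature vectors and labels below are invented for illustration only.

def predict(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(train, key=lambda pair: dist(pair[0], query))
    return nearest[1]

# Labelled training examples: (feature vector, label)
training_data = [
    ((0.1, 0.2), "benign"),
    ((0.2, 0.1), "benign"),
    ((0.9, 0.8), "malicious"),
    ((0.8, 0.9), "malicious"),
]

print(predict(training_data, (0.15, 0.15)))  # near the benign cluster
print(predict(training_data, (0.85, 0.85)))  # near the malicious cluster
```

The key property of supervised learning is visible here: the system is never told an explicit rule separating the two classes; the decision boundary emerges entirely from the labelled examples.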
In reinforcement learning, an artificial environment is created in which an agent learns to traverse different states and execute certain behaviours. Turning to standards, the three types most often encountered are foundational standards, technical interoperability standards, and management standards. Foundational standards provide universally agreed-upon essential notions that guide the development of a shared understanding of artificial intelligence policies and practices.
They may also include specifications for terminology, use cases, and reference architectures. Technical interoperability standards provide tools, such as protocols, that enable disparate systems to communicate and exchange information. Given the rapid advancement of artificial intelligence technologies, interoperability solutions may rely more on open-source partnerships than on standards, at least in the short to medium term. The third category, management standards, is the most important for identifying standards activity that influences the relationship between artificial intelligence and cybersecurity.
Incorporating artificial intelligence (AI) into a system’s design has been shown to boost system resilience by increasing the efficacy of security procedures such as vulnerability assessment and scanning. Compared with manual vulnerability assessments, a fully automated AI-driven assessment may save a substantial amount of money and time.
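The core of an automated vulnerability assessment can be sketched as follows: a scanner matches a software inventory against a database of known-vulnerable versions. This sketch is purely rule-based (the layer on which ML-driven prioritisation could sit), and every component name and version in it is hypothetical, not a real advisory.

```python
# Sketch of an automated vulnerability scan: match a software inventory
# against a database of known-vulnerable versions. All component names
# and versions below are hypothetical examples, not real advisories.

# Hypothetical advisory database: component -> set of vulnerable versions
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
    "fakecrypto": {"2.3.0"},
}

def scan(inventory):
    """Return (component, version) pairs that match a known advisory."""
    findings = []
    for component, version in inventory.items():
        if version in ADVISORIES.get(component, set()):
            findings.append((component, version))
    return findings

inventory = {"examplelib": "1.0.1", "fakecrypto": "2.4.0", "otherlib": "0.9"}
print(scan(inventory))  # only examplelib 1.0.1 matches an advisory
```

Because a pass like this runs in seconds and can be repeated on every deployment, it illustrates where the cost and time savings over manual assessment come from.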
Using artificial intelligence to increase system stability is discussed in more depth later in this section. Code review is another area where artificial intelligence may be applied.
Traditionally, in software engineering, it has been standard practice for one or more peers (reviewers) of the code author to manually review the source code before it is released for general distribution. Artificial intelligence systems may reduce the time this procedure requires while also discovering a greater number of problems than manual review alone.
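As a toy illustration of automated review, the sketch below flags source lines that match simple risk heuristics. Real AI-assisted reviewers learn such signals from large corpora of code and review comments rather than hard-coding them; the two patterns here are illustrative assumptions.

```python
import re

# Toy automated code-review pass: flag lines matching simple risk patterns.
# Real AI reviewers learn such signals from data; these rules are illustrative.
RISK_PATTERNS = [
    (re.compile(r"\beval\("), "use of eval"),
    (re.compile(r"password\s*=\s*['\"]"), "hard-coded credential"),
]

def review(source):
    """Return a (line number, message) pair for each flagged line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISK_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'x = eval(user_input)\npassword = "hunter2"\nprint(x)\n'
print(review(sample))  # flags lines 1 and 2, leaves line 3 alone
```

Even this crude pass shows why automation scales: it inspects every line of every change at negligible cost, leaving human reviewers to focus on design questions a pattern matcher cannot judge.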
Beyond the obvious benefits of boosting system resilience and strengthening security, applying artificial intelligence to harden systems has a strategic advantage: it lowers the value of zero-day exploits on black markets. Zero-day attacks are attractive on the black market precisely because system suppliers are either unaware of the flaws or have no patch available to remedy them; shrinking the window in which such flaws go undetected limits their impact.
First and foremost, it is vital to understand what is happening and where in order to design and execute a successful response strategy. This may require identifying which vulnerabilities are being exploited and where they are located, as well as launching counter-measures against the attack.
In the DARPA Cyber Grand Challenge, a competition sponsored by the US Department of Defense, seven artificial intelligence systems were pitted against one another in 2016. Each system had to identify and remedy its own flaws while also exploiting the weaknesses of its rivals.
The findings of this research show multiple instances in which artificial intelligence has been used for both malicious and benign purposes. The “weaponization” of artificial intelligence would further blur the line between what are essentially two sides of the same coin.
There is a widespread belief that commercially developed programmes may be repurposed for malicious objectives, or even converted into military applications, and vice versa. The assumption that such a contraposition captures the whole picture, however, is erroneous; as some have argued, artificial intelligence is, to the greatest extent possible, a general-purpose technology.
References
Bawack, R.E., Wamba, S.F. and Carillo, K.D.A., 2021. Exploring the role of personality, trust, and privacy in customer experience performance during voice shopping: Evidence from SEM and fuzzy set qualitative comparative analysis. International Journal of Information Management, 58, p.102309.
Chung, K.C., Chen, C.H., Tsai, H.H. and Chuang, Y.H., 2021. Social media privacy management strategies: a SEM analysis of user privacy behaviors. Computer Communications, 174, pp.122-130.
Culot, G., Fattori, F., Podrecca, M. and Sartor, M., 2019. Addressing industry 4.0 cybersecurity challenges. IEEE Engineering Management Review, 47(3), pp.79-86.
Hadlington, L., 2021. The “human factor” in cybersecurity: Exploring the accidental insider. In Research anthology on artificial intelligence applications in security (pp. 1960-1977). IGI Global.
Kuzlu, M., Fair, C. and Guler, O., 2021. Role of artificial intelligence in the Internet of Things (IoT) cybersecurity. Discover Internet of Things, 1(1), pp.1-14.
Liu, N., Nikitas, A. and Parkinson, S., 2020. Exploring expert perceptions about the cyber security and privacy of Connected and Autonomous Vehicles: A thematic analysis approach. Transportation research part F: traffic psychology and behaviour, 75, pp.66-86.
Mendhurwar, S. and Mishra, R., 2021. Integration of social and IoT technologies: architectural framework for digital transformation and cyber security challenges. Enterprise Information Systems, 15(4), pp.565-584.
Mosteanu, N.R., 2020. Artificial intelligence and cyber security–face to face with cyber attack–a maltese case of risk management approach. Ecoforum Journal, 9(2).
Mosteanu, N.R., 2020. Artificial Intelligence and Cyber Security–A Shield against Cyberattack as a Risk Business Management Tool–Case of European Countries. Quality-Access to Success, 21(175).
Mothukuri, V., Parizi, R.M., Pouriyeh, S., Huang, Y., Dehghantanha, A. and Srivastava, G., 2021. A survey on security and privacy of federated learning. Future Generation Computer Systems, 115, pp.619-640.
Mozzaquatro, B.A., Agostinho, C., Goncalves, D., Martins, J. and Jardim-Goncalves, R., 2018. An ontology-based cybersecurity framework for the internet of things. Sensors, 18(9), p.3053.
Nadarzynski, T., Miles, O., Cowie, A. and Ridge, D., 2019. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digital health, 5, p.2055207619871808.