Cybersecurity Law, Policy, and Institutions (version 3.1)

U of Texas Law, Public Law Research Paper No. 716

274 Pages. Posted: 26 Mar 2020. Last revised: 24 Aug 2021

Robert Chesney

University of Texas School of Law

Date Written: August 23, 2021

This is the full text of my interdisciplinary “eCasebook” designed from the ground up to reflect the intertwined nature of the legal and policy questions associated with cyber-security. My aim is to help the reader understand the nature and functions of the various government and private-sector actors associated with cyber-security in the United States, the policy goals they pursue, the issues and challenges they face, and the legal environment in which all of this takes place. It is designed to be accessible for beginners from any disciplinary background, yet useful to experienced audiences too. The first part of the book focuses on the “defensive” perspective (meaning that we will assume an overarching policy goal of minimizing unauthorized access to or disruption of computer systems). The second part focuses on the “offensive” perspective (meaning that there are contexts in which unauthorized access or disruption might actually be desirable as a matter of policy). In short, the book is a guided tour of the broad cyber-security landscape, suitable both for classroom use and for independent study.

Keywords: Cybersecurity, Information Security, Infosec, CFAA, FTC Act, Sanctions, CYBERCOM, NSA, FBI, Hackback, Network Investigative Techniques, Deterrence, Cyberdeterrence, CISA, Information-Sharing

Robert Chesney (Contact Author)

University of Texas School of Law

727 East Dean Keeton Street, Austin, TX 78705, United States

  • DOI: 10.4018/978-1-61350-132-0
  • Corpus ID: 155648033

Investigating Cyber Law and Cyber Ethics: Issues, Impacts and Practices

  • Published 30 September 2011
  • Law, Computer Science

31 Citations

Citing works include:

  • Legal Framework for the Enforcement of Cyber Law and Cyber Ethics in Nigeria
  • Identifying the Ethics of Emerging Information and Communication Technologies: An Essay on Issues, Concepts and Method
  • Gendered Violence and Victim-Blaming: The Law’s Troubling Response to Cyber-Harassment and Revenge Pornography
  • Cyberbullying: A Sociological Approach
  • The Need for Specific Penalties for Hacking in Criminal Law
  • The Significance of the Ethics of Respect
  • Reflections on Cyberethics Education for Millennial Software Engineers
  • Transhumanism and Its Critics: Five Arguments Against a Posthuman Future
  • Online Social Networks and Young People’s Privacy Protection: The Role of the Right to Be Forgotten
  • Postgraduate Research During COVID-19 in a South African Higher Education Institution: Inequality, Ethics, and Requirements for a Reimagined Future

Navigating AI in Legal Practice: A Road Map for In-House Counsel

Deploying Gen AI in these use cases need not threaten legal jobs, two contributors explained in a recent webinar. Instead, it is a means to improve efficiency and to allow lawyers to focus their energies on priorities rather than routine matters.

August 27, 2024 at 03:25 PM

In today’s fast-paced technological landscape, generative artificial intelligence is transforming various sectors, including the legal field. Even for experienced in-house lawyers, the prospect of integrating this technology into the work of a legal department can be daunting. But the promise is real. This road map seeks to assist in-house counsel by setting out a baseline checklist to get this work started.

Gen AI’s Use Cases in Legal Departments

To start, separate out two categories of use cases.

  • Generic Use Cases: These involve using large language models (i.e., AI tools that draw from vast datasets) to answer questions or generate content based on their training data. For example, asking ChatGPT for information on a legal development or using a Gen AI tool to complete a research task.
  • Grounded Use Cases: These involve grounding the Gen AI in specific documents or datasets, enabling it to provide answers based on proprietary information (see the sketch after this list).
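
To make the "grounded" pattern concrete, here is a minimal sketch in plain Python, offered as an illustration rather than any vendor's actual workflow: a toy keyword-overlap retriever selects the most relevant in-house documents and embeds them in a prompt that tells the model to answer only from that material. The document names, the scoring function, and the prompt wording are all assumptions, and the final call to whichever Gen AI provider the department uses is deliberately omitted.

```python
# A minimal sketch of a "grounded" Gen AI use case (illustrative only).
# A toy keyword-overlap retriever picks the most relevant in-house documents,
# and the prompt instructs the model to answer only from those excerpts.
# The final call to an actual Gen AI provider is intentionally omitted.

def score(query: str, document: str) -> int:
    """Toy relevance score: how many query words appear in the document."""
    return sum(1 for word in set(query.lower().split()) if word in document.lower())


def build_grounded_prompt(query: str, documents: dict[str, str], top_k: int = 2) -> str:
    """Select the top_k most relevant documents and embed them in the prompt."""
    ranked = sorted(documents.items(), key=lambda item: score(query, item[1]), reverse=True)
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in ranked[:top_k])
    return (
        "Answer the question using ONLY the excerpts below. "
        "If the answer is not in the excerpts, say so.\n\n"
        f"{context}\n\nQuestion: {query}"
    )


if __name__ == "__main__":
    # Hypothetical in-house documents (names and contents are made up).
    docs = {
        "msa_acme.txt": "The Master Services Agreement with Acme renews annually unless "
                        "either party gives 60 days' written notice of non-renewal.",
        "nda_beta.txt": "The NDA with Beta Corp expires three years after the effective date.",
    }
    prompt = build_grounded_prompt("When can we terminate the Acme MSA?", docs)
    print(prompt)  # This prompt would then go to whichever Gen AI tool the department uses.
```

The design point is simply that the model is constrained to the supplied excerpts, which is what distinguishes a grounded use case from a generic one.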

AI and law: ethical, legal, and socio-political implications

  • Published: 26 March 2021
  • Volume 36, pages 403–404 (2021)

  • John-Stewart Gordon

9981 Accesses

11 Citations

We live in interesting times. Humanity has witnessed unprecedented technological advances with respect to artificial intelligence (AI), which now impacts our daily lives through e.g. our smartphones and the Internet of Things. AI determines the result of our credit and loan applications; in the United States, it often informs parole decisions; and it pervades our work environments.

In recent decades, we have seen the positive effects of AI in almost every area of our lives, but we have also encountered significant ethical and legal challenges in such areas as autonomous transportation, machine bias, and the black box problem. Concerns have also arisen regarding the rapid development and increasing use of smart technologies, particularly with respect to their impact on fundamental rights (Gordon 2020).

This special issue provides an excellent overview of current debates in the realm of AI and law. It contains timely and original articles that thoroughly examine the ethical, legal, and socio-political implications of AI and law as viewed from various academic perspectives, such as philosophy, theology, law, medicine, and computer science. The issues covered include, for example, the key concept of personhood and its legal and ethical dimensions, AI in healthcare, legal regulation of AI, and the legal and ethical issues related to autonomous systems.

In my view, the papers reveal among other things—perhaps not surprisingly—that the current legal system is ill-equipped to solve the hot issues created by the ever-increasing technological advances in AI. In other words, we need proper AI regulation to deal with such present and anticipated issues as machine bias and legal decision making, electronic personhood, and legal responsibility concerning autonomous machines (e.g., autonomous transportation). We could refer to the needed framework as a General AI Law (GAIL). By nature, AI does not stop at national borders; it is inherently global. Therefore, humanity needs a global approach to solve the legal problems that AI poses. Many of the papers in this special issue provide interesting solutions to persistent problems and thereby attempt to shape the ongoing debates quite substantially.

Most domains of human life are, legally speaking, highly regulated. However, today’s attorneys and judges are, for the most part, not quite literate with regard to the implications of AI for law, the legal system, and legal education. To address the changes resulting from the growing application of AI, we must revise our legal curricula. However, one can make effective changes to a system only if one has a proper understanding of the issues at hand. Updating professional legal education in this area will greatly benefit society, since it will enable legal experts to provide better service and to support policymakers in creating the needed GAIL.

It is impossible for me, in this brief editorial, to do justice to all the papers contained in this special issue, but I would like to briefly highlight two important topics that are either explicitly or implicitly addressed in many of the papers. The first topic, which is examined explicitly by several authors, concerns the concept of personhood. Kestutis Mosakas defends, quite convincingly in my view, the traditional consciousness criterion for moral status in the context of social robots, in opposition to some rival approaches including Gunkel’s (2012) famous social-relational approach. Joshua Jowitt, on the other hand, adheres to a Kantian-oriented concept of agency as the basis for legal personhood and thereby offers a moral foundation for the ongoing legal debate over ascribing legal personhood to robots. When reading Jowitt, however, we should keep in mind that the concept of agency necessarily presupposes consciousness, since it seems impossible that an entity that lacks consciousness could be deemed a responsible agent. The reverse is not true; consciousness may, at some point, lead to agency but does not presuppose it.

The concept of personhood is also examined from different vantage points in a joint paper by David Gunkel (from the field of philosophy) and Jordan Wales (from theology). While Gunkel defends his well-known phenomenological approach to moral robots, Wales argues against this approach by claiming that robots are not “natural” persons by definition. This is because they are not endowed with consciousness and are not oriented toward a self-aware inter-subjectivity, which Wales sees as the basis for compassion toward fellow persons. In general, the interesting debate between Gunkel and Wales displays quite prominently the different lines of argumentation with respect to the concept of personhood.

Finally, on this first topic, John-Stewart Gordon provides a substantial analysis of the concepts of moral and legal personhood and also examines their complex relation. He concludes that current robots do not qualify for personhood but that future robots may do so based on their technological sophistication. Gordon, like Jowitt, claims that one should use a uniform criterion to determine the eligibility of all entities for moral status, without making any exceptions—for example, regarding how the entity came into existence. Ultimately, the concept of personhood—whatever that means in detail—is the very foundation of our moral and legal rights. If robots meet this threshold at some point, then it is no longer up to us to decide whether they are eligible for a moral status and rights; they must be viewed as entitled to this eligibility based on their capabilities, independently of our say-so.

This leads us to the second topic that underlies much of the discussion in this special issue—the meaning of moral agency for AI machines. This topic is quite significant with respect to the whole idea of holding intelligent machines or robots morally responsible for their actions. However, many of the papers in this special issue sidestep this point without addressing it directly, either because the authors believe that, at some future point, robots will become moral agents or because their analysis does not require artificial moral agency in the first place. An exception is the provocative paper by Carissa Veliz, who defends the view that algorithms or machines are not moral agents. Her line of reasoning is as follows: Conscious experience or sentience is necessary for moral agency, and since algorithms are not sentient by nature, they are therefore not moral agents. To prove her point, Veliz claims that algorithms are similar to moral zombies, and since moral zombies are not moral agents, one is justified in claiming that the same is true for algorithms. As she states, “Only beings who can experience pain and pleasure can understand what it means to inflict pain or cause pleasure, and only those with this moral understanding can be moral agents.”

My very brief response to Veliz is that, indeed, current intelligent and autonomous machines lack moral agency given their limited capabilities but that this may change over time. Her particular view that sentience is necessary for moral agency is, at least in my view, to some degree misleading, since it would rule out those human beings who reportedly suffer from congenital analgesia and are therefore unable to experience sensations such as pain. Whether such people can fully understand what pain is remains an open question; quite similar to the question of whether people who are congenitally colour-blind can understand what colour vision really is. However, it seems clear that people with congenital analgesia do understand that it is morally wrong to intentionally inflict pain on others. Their understanding seems to be based on their intellectual capacity to imagine what pain could mean for other people, rather than on any personal experience of pain. Therefore, I am rather hesitant to agree that sentience is, in general, necessary for moral agency. Footnote 1

I would like to thank the contributing authors for their excellent and challenging papers, which hold great promise to shape this emerging field significantly. I am also deeply thankful to all referees for their outstanding job in providing detailed and helpful comments. I hope that this special issue will provide a good start for discussing some of our most challenging current legal and ethical problems related to AI. This is not the end; this is the beginning.

Footnote 1: I believe that this is only one possible counterexample among others, but this editorial is not the place to engage in a further response to Veliz’s paper.

References

Gordon J-S (ed) (2020) Smart technologies and fundamental rights. Brill/Rodopi, Leiden

Gunkel D (2012) The machine question: critical perspectives on AI, robots, and ethics. MIT Press, Cambridge, MA

Author information

Authors and Affiliations

Department of Philosophy, Faculty of Humanities, Vytautas Magnus University, V. Putvinskio g. 23 (R 306), 44243, Kaunas, Lithuania

John-Stewart Gordon

Corresponding author

Correspondence to John-Stewart Gordon.

About this article

Gordon, JS. AI and law: ethical, legal, and socio-political implications. AI & Soc 36, 403–404 (2021). https://doi.org/10.1007/s00146-021-01194-0

Accepted: 18 March 2021

Published: 26 March 2021

Issue Date: June 2021

DOI: https://doi.org/10.1007/s00146-021-01194-0

AI needs regulation, but what kind, and how much?

Different countries are taking different approaches to regulating artificial intelligence.

For decades, the field of artificial intelligence (AI) was a laughing stock. It was mocked because, despite its grand promises, progress was so slow. The tables have turned. Advances in the past decade have prompted a growing concern that progress in the field is now dangerously rapid—and that something needs to be done about it. Yet there is no consensus on what should be regulated, how or by whom. What exactly are the risks posed by artificial intelligence, and how should policymakers respond?

Perhaps the best-known risk is embodied by the killer robots in the “Terminator” films—the idea that AI will turn against its human creators. The tale of the hubristic inventor who loses control of his own creation is centuries old. And in the modern era people are, observes Chris Dixon, a venture capitalist, “trained by Hollywood from childhood to fear artificial intelligence”. A version of this thesis, which focuses on the existential risks (or “x-risks”) to humanity that might someday be posed by AI, was fleshed out by Nick Bostrom, a Swedish philosopher, in a series of books and papers starting in 2002. His arguments have been embraced and extended by others including Elon Musk, boss of Tesla, SpaceX and, regrettably, X.

Those in this “AI safety” camp, also known as “AI doomers”, worry that it could cause harm in a variety of ways. If AI systems are able to improve themselves, for example, there could be a sudden “take off” or “explosion” where AIs beget more powerful AIs in quick succession. The resulting “superintelligence” would far outsmart humans, doomers fear, and might have very different motivations from its human creators. Other doomer scenarios involve AIs carrying out cyber-attacks, helping with the creation of bombs and bioweapons and persuading humans to commit terrorist acts or deploy nuclear weapons.

After the release of ChatGPT in November 2022 highlighted the growing power of AI, public debate was dominated by AI-safety concerns. In March 2023 a group of tech grandees, including Mr Musk, called for a moratorium of at least six months on AI development. The following November a group of 100 world leaders and tech executives met at an AI-safety summit at Bletchley Park in England, declaring that the most advanced (“frontier”) AI models have the “potential for serious, even catastrophic, harm”.

This focus has since provoked something of a backlash. Critics make the case that x-risks are still largely speculative, and that bad actors who want to build bioweapons can already look for advice on the internet. Instead of worrying about theoretical, long-term risks posed by AI, they argue, the focus should be on real risks posed by AI that exist today, such as bias, discrimination, AI-generated disinformation and violation of intellectual-property rights. Prominent advocates of this position, known as the “AI ethics” camp, include Emily Bender, of the University of Washington, and Timnit Gebru, who was fired from Google after she co-wrote a paper about such dangers.

Examples abound of real-world risks posed by AI systems going wrong. An image-labelling feature in Google Photos tagged black people as gorillas; facial-recognition systems trained on mostly white faces misidentify people of colour; an AI resumé-scanning system built to identify promising job candidates consistently favoured men, even when names and genders of applicants were hidden; algorithms used to estimate reoffending rates, allocate child benefits or determine who qualifies for bank loans have displayed racial bias. AI tools can be used to create “deepfake” videos, including pornographic ones, to harass people online or misrepresent the views of politicians. And AI firms face a growing number of lawsuits from writers, artists and musicians who claim that the use of their intellectual property to train AI models is illegal.

When world leaders and tech executives met in Seoul in May 2024 for another AI gathering, the talk was less about far-off x-risks and more about such immediate problems—a trend likely to continue at the next AI-safety summit, if it is still called that, in France in 2025. The AI-ethics camp, in short, now has the ear of policymakers. This is unsurprising, because when it comes to making laws to regulate AI, a process now under way in much of the world, it makes sense to focus on attending to existing harms—for example by criminalising deepfakes—or on requiring audits of AI systems used by government agencies.

Even so, politicians have questions to answer. How broad should rules be? Is self-regulation sufficient, or are laws needed? Does the technology itself require rules, or only its applications? And what is the opportunity cost of regulations that reduce the scope for innovation? Governments have begun to answer these questions, each in their own way.

At one end of the spectrum are countries which rely mostly on self-regulation, including the Gulf states and Britain (although the new Labour government may change this). The leader of this pack is America. Members of Congress talk about AI risks but no law is forthcoming. This makes President Joe Biden’s executive order on AI, signed in October 2023, the country’s most important legal directive for the technology.

The order requires that firms which use more than 10^26 computational operations to train an AI model, a threshold at which models are considered a potential risk to national security and public safety, have to notify authorities and share the results of safety tests. This threshold will affect only the very largest models. For the rest, voluntary commitments and self-regulation reign supreme. Lawmakers worry that overly strict regulation could stifle innovation in a field where America is a world leader; they also fear that regulation could allow China to pull ahead in AI research.
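
For a sense of scale, here is a rough, illustrative calculation of when that reporting duty would kick in. It uses the common back-of-the-envelope estimate that training a dense model costs roughly 6 × parameters × training tokens floating-point operations; that rule of thumb, and the example model sizes and token counts below, are assumptions for illustration, not figures from the executive order or from any real disclosure.

```python
# Illustrative only: rough estimate of whether a training run crosses the
# 10^26-operation reporting threshold described above. The "6 * parameters *
# training tokens" rule of thumb is a common approximation for dense-model
# training compute, not a formula taken from the executive order, and the
# model sizes below are assumed figures, not real disclosures.

THRESHOLD_OPS = 1e26


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate total floating-point operations for one training run."""
    return 6.0 * parameters * training_tokens


for name, params, tokens in [
    ("mid-size model (assumed: 70B params, 2T tokens)", 70e9, 2e12),
    ("frontier-scale model (assumed: 2T params, 10T tokens)", 2e12, 10e12),
]:
    ops = estimated_training_ops(params, tokens)
    print(f"{name}: ~{ops:.1e} ops -> reportable: {ops > THRESHOLD_OPS}")
```

On these assumed figures, only a run on the order of trillions of parameters trained on roughly ten trillion tokens crosses the line, which is consistent with the point that the threshold touches only the very largest models.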

China’s government is taking a much tougher approach. It has proposed several sets of AI rules. The aim is less to protect humanity, or to shield Chinese citizens and companies, than it is to control the flow of information. AI models’ training data and outputs must be “true and accurate”, and reflect “the core values of socialism”. Given the propensity of AI models to make things up, these standards may be difficult to meet. But that may be what China wants: when everyone is in violation of the regulations, the government can selectively enforce them however it likes.

Europe sits somewhere in the middle. In May, the European Union passed the world’s first comprehensive legislation, the AI Act, which came into force on August 1st and which cemented the bloc’s role as the setter of global digital standards. But the law is mostly a product-safety document which regulates applications of the technology according to how risky they are. An AI-powered writing assistant needs no regulation, for instance, whereas a service that assists radiologists does. Some uses, such as real-time facial recognition in public spaces, are banned outright. Only the most powerful models have to comply with strict rules, such as mandates both to assess the risks that they pose and to take measures to mitigate them.

A new world order?

A grand global experiment is therefore under way, as different governments take different approaches to regulating AI. As well as introducing new rules, this also involves setting up some new institutions. The EU has created an AI Office to ensure that big model-makers comply with its new law. By contrast, America and Britain will rely on existing agencies in areas where AI is deployed, such as in health care or the legal profession. But both countries have created AI-safety institutes. Other countries, including Japan and Singapore, intend to set up similar bodies.

Meanwhile, three separate efforts are under way to devise global rules and a body to oversee them. One is the AI-safety summits and the various national AI-safety institutes, which are meant to collaborate. Another is the “Hiroshima Process”, launched in the Japanese city in May 2023 by the G7 group of rich democracies and increasingly taken over by the OECD, a larger club of mostly rich countries. A third effort is led by the UN, which has created an advisory body that is producing a report ahead of a summit in September.

These three initiatives will probably converge and give rise to a new international organisation. There are many views on what form it should take. OpenAI, the startup behind ChatGPT, says it wants something like the International Atomic Energy Agency, the world’s nuclear watchdog, to monitor x-risks. Microsoft, a tech giant and OpenAI’s biggest shareholder, prefers a less imposing body modelled on the International Civil Aviation Organisation, which sets rules for aviation. Academic researchers argue for an AI equivalent of the European Organisation for Nuclear Research, or CERN. A compromise, supported by the EU, would create something akin to the Intergovernmental Panel on Climate Change, which keeps the world abreast of research into global warming and its impact.

In the meantime, the picture is messy. Worried that a re-elected Donald Trump would do away with the executive order on AI, America’s states have moved to regulate the technology—notably California, with more than 30 AI-related bills in the works. One in particular, to be voted on in late August, has the tech industry up in arms. Among other things, it would force AI firms to build a “kill switch” into their systems. In Hollywood’s home state, the spectre of “Terminator” continues to loom large over the discussion of AI.

This article appeared in the Schools brief section of the print edition under the headline “Risks and regulations”

CYBER SECURITY AND ETHICS ON SOCIAL MEDIA

Dr. Mudassir Khan

2017, Journal of Modern Developments in Applied Engineering & Technology Research (IMPACT: JMDAETR)

Cyber security has become an important field within information technology, and securing individual and organizational information is among the biggest challenges of the present era. People all over the world now depend on social media. Social media is very useful in our lives, but it is also affected by cyber crimes, which are increasing day by day. Although different social media websites strive to provide their best services, cyber crime continues to grow, and cyber security remains a major concern. Social media has become part of people's lives, and the main concern is how to protect it from cyber crime. Cyber security is essential for using social media without falling victim to cyber crime, but ensuring 100% cyber security in the real world is a very difficult task. This paper also considers the cyber security techniques used to address cyber crime and emphasizes the ethics and trends changing the face of cyber security.

COMMENTS

  1. Ethics in cybersecurity research and practice

    Urgent discussion regarding the ethics of cybersecurity research is needed. This paper critiques existing governance in cyber-security ethics through providing an overview of some of the ethical issues facing researchers in the cybersecurity community and highlighting shortfalls in governance practice. We separate these issues into those facing ...

  2. (PDF) Ethics in cybersecurity research and practice

    Abstract. This paper critiques existing governance in cyber-security ethics through providing an overview of some of the ethical issues facing researchers in the cybersecurity community and ...

  3. A principlist framework for cybersecurity ethics

    Michael Wilson is a Lecturer in the School of Law at Murdoch University. He was awarded his PhD in 2020 for a thesis examining the problem of 'going dark' and Australian digital surveillance law. His research interests include the regulation of cryptography, computer hacking, cybersecurity ethics, surveillance law, and digital evidence.

  4. (PDF) Cybersecurity and Ethics

    Abstract and Figures. This White Paper outlines how the ethical discourse on cybersecurity has developed in the scientific literature, which ethical issues gained interest, which value conflicts ...

  5. The Ethics of Cybersecurity

    Editors: Markus Christen, Bert Gordijn, Michele Loi. First systematic overview of ethics of cybersecurity including case studies. Provides a combined focus on structural, systemic traits and topical debates. Contains rich case studies on practical ethical problems of cybersecurity. Part of the book series: The International Library of Ethics ...

  6. Cybersecurity Law, Policy, and Institutions (version 3.1)

    Abstract. This is the full text of my interdisciplinary "eCasebook" designed from the ground up to reflect the intertwined nature of the legal and policy questions associated with cyber-security.

  7. Ethical Frameworks for Cybersecurity

    The Menlo report was intended to guide research in cybersecurity, understood traditionally as a form of investigation aimed at generalisable knowledge for the benefit of society, and in so far as it deals with human subjects. However, it can also be applied more broadly to cybersecurity operations that involve a research component, e.g. acts of inspections and the collection of intelligence ...

  8. (PDF) Investigating Cyber Law and Cyber Ethics: Issues, Impacts and

    The concept of cyber ethics deals with various code of ethics. There is a great need for cyber ethics as various cyber related issues are increasing like spying, frauds, exploitative conduct. Preventive measures to deal with cyber crime related issues should be known by all. Some awareness programs should be conducted regarding various cyber ...

  9. Cyber Ethics and Law Research Papers

    Deterring Russian cyber warfare: the practical, legal and ethical constraints faced by the United Kingdom. This article examines both the nature of the cyber threat that Russia poses to the United Kingdom and the efficacy of the latter's responses to it. It begins, making use of original Russian sources, with a review of ...

  10. Cyber governance studies in ensuring cybersecurity: an ...

    The research model used in this paper consists of studies obtained from Web of Science (WoS), EBSCO, Scopus, Google Scholar, and TR Index. ... On the theme of cyber law studies and prevention of cybercrime: ... the subject of cyber ethics was discussed and the aim was to explain the truths, mistakes, as well as good and bad behaviors in ...

  11. PDF Investigating cyber law and cyber ethics: Issues, impacts and practices

    the digital ecosystem, this research aims to clarify the complex relationship between these two sectors. In the end, it aims to support a more knowledgeable and moral ... approach to cyber law and cyber ethics in India reflects a concerted effort to create a secure and ethical digital environment, underscoring the nation's commitment to ...

  12. Cyber Law and Ethics Research Papers

    "The IRIE is the official journal of the International Center for Information Ethics (ICIE). It envisions an international as well as intercultural discussion focusing on the ethical impacts of information technology on human practices and thinking, social interaction, other areas of science and research and the society itself."

  13. 1268 PDFs

    Explore the latest full-text research PDFs, articles, conference papers, preprints and more on CYBER LAW. Find methods information, sources, references or conduct a literature review on CYBER LAW

  14. Ethical Approaches to Cybersecurity

    Abstract. This chapter examines current research on cybersecurity ethics. It frames this around three different approaches to the subject. The first ('bottom up') considers ethical issues arising in different case studies and developing groupings of these issues, such as those relating to privacy, those to security, etc.

  15. [PDF] Investigating Cyber Law and Cyber Ethics: Issues, Impacts and

    Investigating Cyber Law and Cyber Ethics: Issues, Impacts and Practices discusses the impact of cyber ethics and cyber law on information technologies and society. Ethical values in computing are essential for understanding and maintaining the relationship between computing professionals and researchers and the users of their applications and programs. While concerns about cyber ethics and ...

  16. PDF An Introduction to Cybersecurity Ethics MODULE AUTHOR: Shannon Vallor

    In the remaining sections of this module, you will have the opportunity to learn more about: Part 1: Important ethical issues in cybersecurity ... Part 5: Ethical 'best practices' for cybersecurity professionals. In each section of the module, you will be asked to fill in answers to specific questions ...

  17. Ethics in Cybersecurity. What Are the Challenges We Need to ...

    In the field of research, the role of ethics grows more and more every year. One might be surprised but even in the field of technology there is a necessity for experts to understand and to implement ethical principles. Ethics itself could be understood as a code or a moral way by which a person lives and works.

  18. PDF Navigating the Legal Frontiers: Cyber Law Challenges in The Digital

    NAVIGATING THE LEGAL FRONTIERS: CYBER LAW CHALLENGES IN THE DIGITAL ERA. Shirin Godara, Student, BBA LL.B.(H), Amity University, Noida, India. Abstract: This research paper explores the evolving landscape of cyber law in the digital era, examining the challenges and complexities faced by legal frameworks. It delves into issues such as online privacy, dig...

  19. Investigating Cyber Law and Cyber Ethics

    Investigating Cyber Law and Cyber Ethics: Issues, Impacts and Practices. All's WELL that ends WELL: A comparative analysis of the Constitutional and Administrative Frameworks of Cyberspace and the United Kingdom ... This paper aims to overview research on GTP for explaining the development of the framework and discuss its potential ...

  20. Cyber Threat Awareness for Your Law Firm: Boring Training Modules Aren

    From robust cyber insurance policies to security assessment schedules, organizations can employ a number of different methods to counteract the risks they may face. Whatever the methodology, however, all organizations should strive for a strong cybersecurity culture built on leadership engagement with, and support for, security practices.

  21. Navigating AI in Legal Practice: A Road Map for In-House Counsel

    Legal Research: Gen AI can add ... ethics and AI governance. ... White Plains cyber security group seeks litigation law firm. Breach of contract $11,000,000 plus. Now in Supreme Court Westchester. Will ...

  22. (PDF) Cyber Laws in India: An Overview

    1. Introduction. Cybercrime is a relatively new type of crime in the world. Any illegal behaviour that occurs on or via the medium of computers, the internet, or other technology recognised by ...

  23. AI and law: ethical, legal, and socio-political implications

    It contains timely and original articles that thoroughly examine the ethical, legal, and socio-political implications of AI and law as viewed from various academic perspectives, such as philosophy, theology, law, medicine, and computer science. The issues covered include, for example, the key concept of personhood and its legal and ethical ...

  24. The Future of International Scientific Assessments of AI's Risks

    Hadrien Pouget and Claire Dennis contributed equally to the paper and should be jointly cited in references. Name order was randomized. If possible, please cite as follows: Pouget, H., Dennis, C., et al.(2024) 'The Future of International Scientific Assessments of AI's Risks,' Carnegie Endowment for International Peace.

  25. Investigating Cyber Law and Cyber Ethics

    Academia.edu is a platform for academics to share research papers: (PDF) Investigating Cyber Law and Cyber Ethics | Maryam Ahmed - Academia.edu

  26. AI needs regulation, but what kind, and how much?

    This is unsurprising, because when it comes to making laws to regulate AI, a process now under way in much of the world, it makes sense to focus on attending to existing harms—for example by ...

  27. CYBER SECURITY AND ETHICS ON SOCIAL MEDIA

    The issues and crimes in cyber technology are generated by this development. ... The ethical issues in information technology have taken on new urgency with the growth of internet e-commerce.