The artificial intelligence era needs its own Karl Marx | Mint – Mint

For the first time since the 1960s, Hollywood writers and actors went on strike recently. They fear generative artificial intelligence (AI) will take away their jobs. That AI will displace several humans from their present jobs is a reality. By all indications, AI will hit white-collar jobs hardest.

Job losses are not the only problem that AI could create in an economy. Daron Acemoglu, a Massachusetts Institute of Technology economist, has found compelling evidence that the automation of tasks done by human workers has contributed to a slowdown in wage growth and thus to worsening inequality in the US. According to Acemoglu, 50% to 70% of the growth in US wage inequality between 1980 and 2016 was caused by automation. This study was done before the surge in the use of AI technologies. Acemoglu worries that AI-based automation will make this income inequality problem even worse. In the words of Diane Coyle, an economist at Cambridge University and the author of Cogs and Monsters: What Economics Is and What It Should Be: "An economy of tech millionaires or billionaires and gig workers, with middle-income jobs undercut by automation, will not be politically sustainable."

In the past, democratic governments initiated several steps to redistribute economic resources, such as land, to larger populations in an effort to avoid the concentration of wealth in too few hands. As in the past, governments across the world have started moving to loosen the stranglehold that Big Tech has on defining the AI agenda. The Digital Public Infrastructure initiatives of the Indian government are an example of large-scale digital empowerment. But the crucial question for policymakers is what more they need to do to manage the fallout of AI adoption, not just in terms of massive job losses, but more so in terms of the huge economic inequality that AI could produce.

How many existing jobs will AI take away? Carl Frey and Michael Osborne of Oxford University posit that AI technologies can replace nearly 47% of US jobs. This means the income of 47% of the US workforce will be affected, and the only way to enable those workers to attain the same level of income they had before the advent of AI is to re-skill them. Any such re-skilling initiatives will be useful even for those who do have jobs, including workers in the AI industry itself. Several studies have shown that in the fast-evolving field of AI, the half-life of any technology, or the time after which a particular technology becomes obsolete, is just a few years. So, just to stay relevant, AI-sector employees need to acquire new learnings on a regular basis.

In the past, haves and have-nots were identified by their ownership, or lack thereof, of key economic resources such as land and other productive assets like factories. Today, in the AI economy, haves and have-nots will be distinguished by who has the appropriate knowledge and who does not. As the world economy moves forward, whether the challenge for individuals is to get new jobs or to stay relevant in existing jobs, people will have to acquire new knowledge on a continuous basis. In other words, in an AI economy, individuals can never step off the knowledge-acquisition treadmill.

But how easy is it to get people to regularly exercise their minds? Numerous ed-tech companies have sprung up with the promise of imparting various forms of new knowledge. The principal focus of these companies is on developing high-quality content and using modern technology to scale up its distribution. Thanks to the efforts of these ed-tech companies, today it is possible to listen to lectures by the best professors in the world on one's own smartphone.

Up-skilling sounds easy. But there is a problem. For every hundred people who enroll in the courses offered by these ed-tech companies, only a single-digit number actually complete them. The vast majority of those starting their knowledge-acquisition journeys step off their learning treadmills, often for good, typically leaving the exercise incomplete.

The phenomenon of drop-outs from knowledge acquisition journeys can be attributed to fundamental human nature. The human brain loves the status quo.

It is very difficult to get humans out of their comfort zones. It is even more difficult to get humans to accept the inadequacies of their existing knowledge, burn their past and get them to embrace new learnings. This tendency of humans to hold on to their status quo knowledge, even when it is outdated, could end up as one of the biggest contributors to inequality in an AI-driven economy. Those who do not acquire knowledge on a routine basis could find themselves unable to earn a living.

While there has been a hue and cry over AI technology taking jobs away from humans, there is almost no discussion on equipping individuals to survive this shift through the structured acquisition of new knowledge and skills.

After the Industrial Revolution, significant movements like trade unionization and political philosophies like communism strived hard to achieve greater equality at the workplace and in the larger economy. The need of the hour is a similarly broad-based social movement that can address the crisis of inequality that AI adoption has begun to generate. The effects will be profound, and the solutions will have to be equally so. Where is the Karl Marx of the AI age?

Excerpt from:
The artificial intelligence era needs its own Karl Marx | Mint - Mint

Read More..

Attention to Attention is What You Need: Artificial Intelligence and … – Psychiatric Times

In just a few months, artificial intelligence (AI) has certainly exploded onto the stage in a way that has surprised many. Take, for instance, the mass popularity of ChatGPT, GPT-3, GPT-2, and BERT. The scale and intelligence of these models, enabled by advances in computing power and large data sets, provide fertile ground for AI to take off.1,2

In medicine, we are used to applying approaches to diagnosis and treatment that are rooted in a deep understanding of disease processes and informed by critical appraisal of evidence-based strategies and experience over time. Medicine has adapted to and kept pace with various emerging technologies and, as a field, has achieved many advances.3 Part of the heuristic and epistemological approach is that technology has always been a tool to be applied to the medical process.4

Agency and control have been at the forefront of how we use tools. Still, the introduction of new tools has often brought some initial trepidation. Looking at the evolution of different tools over time, nearly every one has provoked some initial anxiety and fear. One can only imagine the angst of a painter at the emergence of photography; and yet, painting and art have not been displaced.

The emergence of AI has generated much discussion, even among those embedded in the technological field. An approach to machine learning and artificial intelligence should probably stem from an understanding of what it is and what it can do. In taking this approach, we position ourselves to inform industry and to help solve meaningful problems within an ethical and value-based framework.

The emergence and adoption of technology in society has brought on various emotions, and a number of researchers have explored this area. One particular model is Gartner's Hype Cycle, whereby a new technology is met with a peak of excitement, followed by a phase of disillusionment, and then a normalization phase in which one understands the utility and limitations of the new tool.

Another heuristic for understanding emerging technology is the economic perspective. The Kondratiev Wave theory describes long cycles in the economy and links them with technology. Carlota Perez, a researcher in the field of paradigm shifts, defines a technological revolution as "a powerful and highly visible cluster of new and dynamic technologies, products, and industries capable of bringing about an upheaval in the whole fabric of the economy and propelling a long-term upsurge of development."

It is quite astounding that a machine can read large amounts of data and emulate and identify patterns yet, at its heart, not quite understand what it is doing. So, although the technology can absorb, in a short time, an immense amount of knowledge that is often cultivated over many years, it still has challenges with reasoning.

For us in the medical world, it is hard to imagine a system that emulates what we do: refine the diagnostic process and apply knowledge to patterns based on genetics, epigenetics, life experiences, and responses to various medication therapies, and then fine-tune this to each patient while seeing it from the individual's perspective and values.

So, one may ask, what is the concern? A recent letter from several technology leaders spoke to the concerns around the rapid deployment of AI.5

In some ways, these technological innovations have always had human beings behind the controls. What is currently challenging and concerning for various individuals, including those in the fields of computer science and engineering, is the lack of clarity about how the machine itself reasons and the risk that this can pose. However, although the genie is out of the bottle, we can try to position ourselves at the front and center of the decision-making process and help inform innovators, inventors, and data scientists.

Much of machine learning is based on teaching the machine how to learn and reason, drawing on a number of mathematical models. In order to understand the underlying AI technology, it is helpful to take a closer look at how AI models are structured.

Machine Learning Models: Recurrence, Convolution, and Transformers

Recurrence, convolution, and transformers are 3 important concepts in AI that have been widely used in machine learning models. Recurrence helps models remember what happened before, convolution finds important patterns in data, and transformers focus on understanding relationships between different parts of the input.

Recurrence

Think of recurrence as a memory that helps a model remember information from previous steps. It is useful when dealing with things that happen in a specific order or over time. For example, if you are predicting the next word in a sentence, recurrence helps the model understand the words that came before it. It is like connecting the dots by looking at what happened before to make sense of what comes next.
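
To make this concrete, here is a minimal sketch of a single recurrent step in Python with NumPy; the dimensions, random weights, and toy sequence are illustrative assumptions, not taken from any particular model:

```python
import numpy as np

# Toy dimensions for illustration: 8-dimensional inputs, 16-dimensional hidden state.
rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.1, size=(16, 8))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(16, 16))  # hidden-to-hidden weights: the "memory" path
b = np.zeros(16)

def rnn_step(h_prev, x_t):
    # The new hidden state mixes the current input with the previous state,
    # which is how the model "remembers" earlier steps in a sequence.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(16)                    # empty memory before the sequence starts
sequence = rng.normal(size=(5, 8))  # a toy sequence of 5 time steps
for x_t in sequence:
    h = rnn_step(h, x_t)            # h now summarizes everything seen so far
```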

Convolution

Convolution is like a filter that helps the model find important patterns in data. It is commonly used for tasks involving images or grids of data. Just like our brain focuses on specific parts of an image to understand it, convolution helps the model focus on important details. It looks for features like edges, shapes, and textures, allowing the model to recognize objects or understand the structure of the data.
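
As an illustration, the sketch below implements a bare-bones 2D convolution in Python/NumPy and applies a classic vertical-edge filter to a toy image; the kernel and image are illustrative assumptions rather than anything from a production vision model:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over an image, recording its response at each position."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic vertical-edge filter: it responds where brightness changes from left to right.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

image = np.zeros((6, 6))
image[:, 3:] = 1.0  # dark left half, bright right half
response = convolve2d(image, edge_kernel)
# response has large-magnitude values along the boundary between the two halves.
```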

Transformers

Transformers are like smart attention machines. They excel in understanding relationships between different parts of a sentence or data without needing to process them in order. They can find connections between words that are far apart from each other. Transformers are especially powerful in tasks like language translation, where understanding the context of each word is crucial. They work by paying attention to different words and weighing their importance based on their relationships.

How Transformers Became So Impactful

A landmark 2017 paper on AI titled "Attention Is All You Need" by Vaswani and colleagues6 laid important groundwork for understanding the transformer model. Unlike recurrence and convolution, the transformer model relies heavily on the self-attention mechanism. Self-attention allows the model to focus on different parts of the input sequence during processing, enabling it to capture long-range dependencies effectively. Attention mechanisms allow the model to model dependencies between input and output sequences without regard to their distance. This gives the machine incredibly advanced capabilities, especially when powered by advanced computing power.
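
The core operation of that paper is scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of that formula; for simplicity it uses toy dimensions and skips the learned projection matrices and multi-head machinery a real transformer adds:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per Vaswani et al."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each query position attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row is an attention distribution
    return weights @ V  # each output is a weighted mix of the value vectors

# Toy example: a 4-token sequence with 8-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
# In self-attention, queries, keys, and values all come from the same sequence;
# a real transformer first applies learned projections to each, omitted here.
output = scaled_dot_product_attention(tokens, tokens, tokens)
```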

Machine Learning Frameworks

Currently, there are several frameworks that can be applied to the machine learning process; one widely used example is CRISP-DM (the Cross-Industry Standard Process for Data Mining).

The CRISP-DM approach involves 6 phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.

Concerns With AI

In medicine and psychiatry, we are familiar with distortions that can arise in human thinking. We know that thinking about what we are thinking about becomes an important skill in training the mind. In AI, the loss of human control and input in informing the machines is at the heart of many concerns. There are several reasons for this.

Addressing these concerns requires a comprehensive approach that emphasizes transparency, accountability, fairness, and human oversight in the development and deployment of AI systems. It is crucial to consider the societal impact of AI and to establish regulations and guidelines that ensure its responsible and ethical use.

Positives and Negatives in the Medical Community

For the medical community specifically, this new technology brings both positives and negatives. By leveraging the potential of AI while addressing its limitations and concerns, health care can benefit from improved diagnostics.

Positive aspects:

Negative aspects:

Evaluating AI Technology

A proposed mechanism for physicians and health care workers to evaluate technology might be a framework similar to what we have identified as an evidence-based tool. Here are some guiding questions for evaluating the technology:

A couple of suggested evaluation tools that can be used in interpreting AI models in health care are listed in Figures 1 and 2. These mnemonics can serve as a framework for health care professionals to systematically evaluate and interpret AI models, ensuring that ethical considerations, transparency, and accuracy are prioritized in the implementation and use of AI in health care.

Dr Amaladoss is a clinical assistant professor in the Department of Psychiatry and Behavioral Neurosciences at McMaster University. He is a clinician, scientist, and educator who has been a recipient of a number of teaching awards. His current research involves personalized medicine and the intersection of medicine and emerging technologies, including developing machine learning models and AI to improve health care. Dr Amaladoss has also been involved with the recent task force on AI and emerging digital technologies at the Royal College of Physicians and Surgeons.

Dr Ahmed is an internal medicine resident at the University of Toronto. He has led and published research projects in multiple domains including evidence-based medicine, medical education, and cardiology.

References

1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56.

2. Szolovits P. Ed. Artificial Intelligence in Medicine. Routledge; 1982.

3. London AJ. Artificial intelligence in medicine: overcoming or recapitulating structural challenges to improving patient care? Cell Rep Med. 2022;3(5):100622.

4. Larentzakis A, Lygeros N. Artificial intelligence (AI) in medicine as a strategic valuable tool. Pan Afr Med J. 2021;38:184.

5. Mohammad L, Jarenwattananon P, Summers J. An open letter signed by tech leaders, researchers proposes delaying AI development. NPR. March 29, 2023. Accessed August 1, 2023. https://www.npr.org/2023/03/29/1166891536/an-open-letter-signed-by-tech-leaders-researchers-proposes-delaying-ai-developme

6. Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. NIPS. June 12, 2017. Accessed August 10, 2023. https://www.semanticscholar.org/paper/Attention-is-All-you-Need-Vaswani-Shazeer/204e3073870fae3d05bcbc2f6a8e263d9b72e776

View post:
Attention to Attention is What You Need: Artificial Intelligence and ... - Psychiatric Times

Read More..

Revolutionizing healthcare: the role of artificial intelligence in clinical … – BMC Medical Education

Go here to see the original:
Revolutionizing healthcare: the role of artificial intelligence in clinical ... - BMC Medical Education

Read More..

CFPB Issues Guidance on Credit Denials by Lenders Using Artificial … – Consumer Financial Protection Bureau

WASHINGTON, D.C. Today, the Consumer Financial Protection Bureau (CFPB) issued guidance about certain legal requirements that lenders must adhere to when using artificial intelligence and other complex models. The guidance describes how lenders must use specific and accurate reasons when taking adverse actions against consumers. This means that creditors cannot simply use CFPB sample adverse action forms and checklists if they do not reflect the actual reason for the denial of credit or a change of credit conditions. This requirement is especially important with the growth of advanced algorithms and personal consumer data in credit underwriting. Explaining the reasons for adverse actions helps improve consumers' chances for future credit and protects consumers from illegal discrimination.

"Technology marketed as artificial intelligence is expanding the data used for lending decisions, and also growing the list of potential reasons for why credit is denied," said CFPB Director Rohit Chopra. "Creditors must be able to specifically explain their reasons for denial. There is no special exemption for artificial intelligence."

In today's marketplace, creditors are increasingly using complex algorithms, marketed as artificial intelligence, and other predictive decision-making technologies in their underwriting models. Creditors often feed these complex algorithms with large datasets, sometimes including data that may be harvested from consumer surveillance. As a result, a consumer may be denied credit for reasons they may not consider particularly relevant to their finances. Despite the potentially expansive list of reasons for adverse credit actions, some creditors may inappropriately rely on a checklist of reasons provided in CFPB sample forms. However, the Equal Credit Opportunity Act does not allow creditors to simply conduct check-the-box exercises when delivering notices of adverse action if doing so fails to accurately inform consumers why adverse actions were taken.

In fact, the CFPB confirmed in a circular from last year that the Equal Credit Opportunity Act requires creditors to explain the specific reasons for taking adverse actions. This requirement remains even if those companies use complex algorithms and black-box credit models that make it difficult to identify those reasons. Today's guidance expands on last year's circular by explaining that sample adverse action checklists should not be considered exhaustive, nor do they automatically cover a creditor's legal requirements.

Specifically, today's guidance explains that even for adverse decisions made by complex algorithms, creditors must provide accurate and specific reasons. Generally, creditors cannot state the reasons for adverse actions by pointing to a broad bucket. For instance, if a creditor decides to lower the limit on a consumer's credit line based on behavioral spending data, the explanation would likely need to provide more details about the specific negative behaviors that led to the reduction, beyond a general reason like "purchasing history."

Creditors that simply select the closest factors from the checklist of sample reasons are not in compliance with the law if those reasons do not sufficiently reflect the actual reason for the action taken. Creditors must disclose the specific reasons, even if consumers may be surprised, upset, or angered to learn their credit applications were being graded on data that may not intuitively relate to their finances.

In addition to today's and last year's circulars, the CFPB has issued an advisory opinion that consumer financial protection law requires lenders to provide adverse action notices to borrowers when changes are made to their existing credit.

The CFPB has made the intersection of fair lending and technology a priority. For instance, as the demand for digital, algorithmic scoring of prospective tenants has increased among corporate landlords, the CFPB reminded landlords that prospective tenants must receive adverse action notices when denied housing. The CFPB also has joined with other federal agencies to issue a proposed rule on automated valuation models, and is actively working to ensure that black-box models do not lead to acts of digital redlining in the mortgage market.

Read Consumer Financial Protection Circular 2023-03, "Adverse action notification requirements and the proper use of the CFPB's sample forms provided in Regulation B."

Consumers can submit complaints about financial products and services by visiting the CFPB's website or by calling (855) 411-CFPB (2372).

Employees who believe their companies have violated federal consumer financial protection laws are encouraged to send information about what they know to whistleblower@cfpb.gov. Workers in technical fields, including those that design, develop, and implement artificial intelligence, may also report potential misconduct to the CFPB. To learn more, visit the CFPB's website.

The Consumer Financial Protection Bureau is a 21st century agency that implements and enforces Federal consumer financial law and ensures that markets for consumer financial products are fair, transparent, and competitive. For more information, visit consumerfinance.gov.

Go here to see the original:
CFPB Issues Guidance on Credit Denials by Lenders Using Artificial ... - Consumer Financial Protection Bureau

Read More..

Artificial Intelligence Added as Honor Code Violation The Oberlin … – The Oberlin Review

On May 17, Oberlin changed the school's Honor Code Charter to include the use of artificial intelligence as a punishable offense under the cheating section of the Code. The Honor Code Charter is reviewed by the Honor Committee every three years.

The amended Charter prohibits the use of "artificial intelligence software or other related programs to create or assist with assignments on the behalf of a student unless otherwise specified by the faculty member and/or the Office of Disability & Access."

The decision comes in the wake of questions surrounding the threat to academic integrity posed by generative AI chatbots, such as OpenAI's ChatGPT.

AI cases could have been pursued under the old Charter, Associate Dean of Students Thom Julian said, and the new clause simply acts as a clarification rather than a change in policy.

"The school felt it necessary to add to the Honor Code," Julian said. "We started to see issues come up within the classroom last year, and [at] a lot of our peer institutions, I saw that they were also having some similar issues. We just wanted to be able to provide really clear guidance around it, not just for faculty, but for students, so everyone has clear expectations within the classroom."

The Student Honor Committee and liaison made most of the edits, according to College second-year Kash Radocha, a member and panelist of the SHC. The proposed changes were then reviewed by peer institutions, and a legal review was conducted. After it attained approval from the SHC, it was also approved by the Faculty Honor Committee and General Faculty Council. The new changes came into effect for the fall 2023 semester after going through the Student Senate and the General Faculty.

The addition of AI is only one of the changes made to the Charter. Another revision allows both claimants and respondents to appeal decisions, unlike the earlier system, where only respondents were allowed to. Additionally, if a student is found not guilty by the SHC, the faculty member is recommended to grade the assignment in accordance with its merit and note the reported violation. The College has also increased the maximum number of SHC seats from 15 to 20 and changed the process for removing a member.

The amendment of the Charter was a highly collaborative process, Radocha said. "The process for amending the Honor Code Charter is not a light one, and multiple checks and balances are in place to ensure the changes are valid and widely accepted."

Professors have also had to grapple with the consequences of their students' knowledge of AI. Assistant Professor of Philosophy Amy Berg, who teaches a class on Ethics and Technology, said that when she taught the class in the spring of 2023, her students were already familiar with large language models such as ChatGPT. However, since ChatGPT had only been released a few months prior to the start of that class, she had not been able to add much to the curriculum.

"[T]he academic or philosophical or ethical work on ChatGPT just has not caught up to its use," she said. "So, I know when I teach the class next time, I'll have to spend a lot more time on AI and, specifically, on whatever forms of AI are current at the time."

Some members of the faculty have begun to add ChatGPT to their curriculums in creative ways to allow students to understand what its capabilities are. One example is Assistant Professor of Politics Joshua Freedman, who spoke about a ChatGPT assignment he gave to students in spring 2023.

"I thought that for both my own sake and the students' that we should use it to figure out what it's capable of," Freedman said. "And so, I had students ask ChatGPT a question of relevance to the course and then, in a series of follow-up questions, I had them dig deeper and deeper, keep pushing the AI to give them the best possible answer. To see how well does this large language model answer the questions that we're trying to answer in this course."

Professor Freedman said that he likes the idea of a default ban while giving faculty the power to allow the use of AI for certain assignments. While Professor Berg has not yet changed the structure of her assignments to include ChatGPT, she does think that the way classes are conducted will change.

"I would expect that many professors, maybe me included, will move to oral assignments, in-class assignments, more in-class writing, less done out of class, because we're concerned that, for various reasons, people will take shortcuts," Professor Berg said. "I think, also, some professors will look for ways to integrate ChatGPT into the writing or thinking process, and there are good reasons to do that, too."

According to Radocha, the addition to the Honor Charter allows the school to better plan for the future of AI.

"It is a precautionary measure for us to include it within the Charter now, so that by the time we review it again in 2026, we can amend the current AI clause based on how we have experienced it via cases in that timespan of three years," Radocha said.

Originally posted here:
Artificial Intelligence Added as Honor Code Violation The Oberlin ... - The Oberlin Review

Read More..

NYU and KAIST Launch Major New Initiative on Artificial Intelligence … – New York University

NYU President Linda G. Mills and Korea Advanced Institute of Science and Technology (KAIST) President Kwang Hyung Lee were joined by Sung Bae Jun, president of the Institute of Information & Communications Technology Planning & Evaluation, and Joon Hee Joh, president of the Korea Software Industry Association, in signing an agreement to collaborate on a major artificial intelligence (AI) and digital technologies research effort.

Senior public officials, including the President of the Republic of Korea, Yoon Suk Yeol; Korea's Minister of Science and Information and Communications Technology, Jong-Ho Lee; the Director of the US National Science Foundation, Sethuraman Panchanathan; and NYC Deputy Mayor for Housing, Economic Development, and Workforce, Maria Torres-Springer, as well as Turing Award-winning AI scientist and NYU faculty member Yann LeCun, convened at NYU's Greenwich Village campus to mark the new partnership and launch a Digital Vision Forum with leading thinkers on AI and digital governance from around the world. Senator Charles Schumer participated in the proceedings via video. The event also marked the anniversary of the first Digital Vision Forum, which was held precisely a year ago at NYU to initiate the partnership between NYU, the Republic of Korea, and KAIST, an event that also featured remarks by President Yoon.

Today's historic event positions NYU, New York City, and Korea at the forefront of the global science and tech ecosystem of the future. NYU President Mills said, "We are honored to bring together leaders in government, academia, and industry to commemorate a vital and historic partnership that will propel scholarship and advancements in technology. We are thrilled by this partnership, which exemplifies both NYU's commitment to global learning and research as well as our role in fueling the growth of New York City's tech, science, and innovation sector."

Senator Schumer said, "I want to commend President Yoon and my friend, NYU President Linda Mills, on today's announcement of a historic joint research program between NYU and the South Korean government. The partnership is a partnership made in heaven: NYU, one of the nation's leading research institutions, and South Korea, one of America's strongest allies and partners, and also a leader in research and science, collaborating on one of the most important issues of our time, artificial intelligence."

NSF Director Panchanathan said, "As our two presidents affirmed at the State Visit in April, the U.S. and the Republic of Korea have a truly global alliance that champions democratic principles, enriches economic cooperation, and empowers technological advances. NSF shares in President Yoon's conviction that human values are important in the development of new technology. Values including openness and transparency, and the creation of AI tools that are responsible and ethical, without bias, and protect the security and privacy of our people."

The research effort, the ROK Institutions-NYU AI and Digital Partnership, aims to conduct world-class research in AI and digital technologies. The partnership is expected to be headquartered at NYU.

Today's event marks the expansion of NYU's previously announced partnership and strengthens the University's links to Korea and its institutions. The event included a wide-ranging panel discussion about AI and digital governance by prominent scholars in the field. The panel was moderated by Professor Matthew Liao, director of the Center for Bioethics at NYU's School of Global Public Health; the panelists included:

Professor Kyung-hyun Cho, Deputy Director for NYU Center for Data Science & Courant Institute

Professor Luciano Floridi, Founding Director of the Digital Ethics Center, Yale University

Professor Urs Gasser, Rector of the Hochschule für Politik, Technical University of Munich

Professor Shannon Vallor, Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence, University of Edinburgh

Professor Stefaan Verhulst, Co-Founder & Director of GovLab's Data Program, NYU Tandon School of Engineering, and

Professor Jong Chul Ye, Director of Promotion Council for Digital Health, KAIST

For NYU's new president, Linda G. Mills, this is the third major global agreement she has signed this month. She earlier signed a research partnership agreement with IIT Kanpur in India (an agreement cited by US President Joe Biden and Indian Prime Minister Narendra Modi in their joint statement) and renewed a partnership agreement between NYU's Shanghai campus and East China Normal University. Building on NYU's unrivaled global presence, strength, and character is expected to be a major priority of her administration.

About NYU

Founded in 1831, NYU is one of the world's foremost research universities (with more than $1 billion per year in research expenditures) and is a member of the selective Association of American Universities. NYU has degree-granting university campuses in New York, Abu Dhabi, and Shanghai; has 12 other global academic sites, including London, Paris, Florence, Tel Aviv, Buenos Aires, and Accra; and both sends more students to study abroad and educates more international students than any other U.S. college or university. Through its numerous schools and colleges, NYU is a leader in conducting research and providing education in the arts and sciences, law, medicine, engineering, business, dentistry, education, nursing, the cinematic and performing arts, music and studio arts, public administration, social work, and professional studies, among other areas.

About KAIST

Since KAIST was established in 1971, KAIST and its alumni have been the gateway to advanced science and technology, innovation, and entrepreneurship and have made a significant contribution to creating the dynamic economy of today's Korea. KAIST has now emerged as one of the most innovative universities; it ranked 1st among the Most Innovative Universities in Asia from 2016 to 2018 and 11th among the World's Most Innovative Universities in 2018 by Thomson Reuters. KAIST was named one of the Global 100 Innovators in 2021 by Clarivate, the only university listed. QS ranked KAIST the 20th-best university in engineering and technology in 2022, and the Nature Index's Young University Ranking placed KAIST 4th in the world. KAIST continues to spearhead innovation and lead the advance of science and technology in Korea and beyond, and aims to contribute to the development of new dynamic engines of growth and innovation through collaboration with NYU to foster more future-oriented, creative global talents, young researchers, and entrepreneurs in the creative environment of New York City.

Visit link:
NYU and KAIST Launch Major New Initiative on Artificial Intelligence ... - New York University

Read More..

IBM Commits To Training Two Million in Artificial Intelligence – Facility Executive Magazine

Adobe Stock/kasto

To help close the global artificial intelligence (AI) skills gap, IBM announced a commitment to train two million learners in AI by the end of 2026, with a focus on underrepresented communities.

To achieve this goal at a global scale, IBM is expanding AI education collaborations with universities globally, collaborating with partners to deliver AI training to adult learners, and launching new generative AI coursework through IBM SkillsBuild. This will expand upon IBM's existing programs and career-building platforms to offer enhanced access to AI education and in-demand technical roles.

According to a recent global study conducted by the IBM Institute for Business Value, surveyed executives estimate that implementing AI and automation will require 40% of their workforce to reskill over the next three years, mostly those in entry-level positions. This further reinforces that generative AI is creating a demand for new roles and skills.

IBM is collaborating with universities at a global level to build capacity around AI, leveraging IBM's network of experts. University faculty will have access to IBM-led training such as lectures and immersive skilling experiences, including certificates upon completion. Also, IBM will provide courseware for faculty to use in the classroom, including self-directed AI learning paths. In addition to faculty training, IBM will offer students flexible and adaptable resources, including free, online courses on generative AI and Red Hat open source technologies.

Through IBM SkillsBuild, learners across the world can benefit from AI education developed by IBM experts to provide the latest in cutting edge technology developments. IBM SkillsBuild already offers free coursework in AI fundamentals, chatbots, and crucial topics such as AI ethics. The new generative AI roadmap includes coursework and enhanced features.

These courses are all completely free and available to learners around the world. At course completion, participants will be able to earn IBM-branded digital credentials that are recognized by potential employers.

This new effort builds on IBM's existing commitment to skill 30 million people by 2030 and is intended to address the urgent needs facing today's workforce. Since 2021, over 7 million learners have enrolled in IBM courses. Worldwide, the skills gap presents a major obstacle to the successful application of AI and digitalization across industries, and beyond technology experts. This requires a comprehensive world view to be developed and implemented. IBM's legacy of investing in the future of work includes making free online learning widely available, with clear pathways to employment, and a focus on historically underrepresented communities in tech, where the skills gap is wider.

View post:
IBM Commits To Training Two Million in Artificial Intelligence - Facility Executive Magazine

Read More..

When Artificial Intelligence Gets It Wrong – Innocence Project

Porcha Woodruff was eight months pregnant when she was arrested for carjacking. The Detroit police used facial recognition technology to run an image of the carjacking suspect through a mugshot database, and Ms. Woodruff's photo was among those returned.

Ms. Woodruff, an aesthetician and nursing student who was preparing her two daughters for school, was shocked when officers told her that she was being arrested for a crime she did not commit. She was questioned over the course of 11 hours at the Detroit Detention Center.

A month later, the prosecutor dismissed the case against her based on insufficient evidence.

Ms. Woodruff's ordeal demonstrates the very real risk that cutting-edge artificial intelligence-based technology, like the facial recognition software at issue in her case, presents to innocent people, especially when such technology is neither rigorously tested nor regulated before it is deployed.

Time and again, facial recognition technology gets it wrong, as it did in Ms. Woodruffs case. Although its accuracy has improved over recent years, this technology still relies heavily on vast quantities of information that it is incapable of assessing for reliability. And, in many cases, that information is biased.

In 2016, Georgetown University's Center on Privacy & Technology noted that at least 26 states allow police officers to run or request to have facial recognition searches run against their driver's license and ID databases. Based on this figure, the center estimated that one in two American adults has their image stored in a law enforcement facial recognition network. Furthermore, given the disproportionate rate at which African Americans are subject to arrest, the center found that facial recognition systems that rely on mug shot databases are likely to include an equally disproportionate number of African Americans.

More disturbingly, facial recognition software is significantly less reliable for Black and Asian people, who, according to a study by the National Institute of Standards and Technology, were 10 to 100 times more likely to be misidentified than white people. The institute, along with other independent studies, found that these systems' algorithms struggled to distinguish between facial structures and darker skin tones.

The use of such biased technology has had real-world consequences for innocent people throughout the country. To date, six people that we know of have reported being falsely accused of a crime following a facial recognition match; all six were Black. Three of those who were falsely accused in Detroit have filed lawsuits, one of which urges the city to gather more evidence in cases involving facial recognition searches and to end the "facial recognition to line-up" pipeline.

Former Detroit Police Chief James Craig acknowledged that if the city's officers were to use facial recognition by itself, it would yield misidentifications 96% of the time.

Even when an AI-powered technology is properly tested, the risks of a wrongful arrest and wrongful conviction remain and are exacerbated by these new tools.

That's because when AI identifies a suspect, it can create a powerful, unconscious bias against the technology-identified person that hardens the focus of an investigation away from other suspects.

Indeed, such technology-induced tunnel vision has already had damaging ramifications.

For example, in 2021, Michael Williams was jailed in Chicago for the first-degree murder of Safarian Herring based on a ShotSpotter alert that police received. Although ShotSpotter purports to triangulate a gunshot's location through an AI algorithm and a network of microphones, an investigation by the Associated Press found that the system is deeply statistically unreliable because it can frequently miss live gunfire or mistake other sounds for gunshots. Still, based on the alert and a noiseless security video that showed a car driving through an intersection, Mr. Williams was arrested and jailed even though police and prosecutors never established a motive explaining his alleged involvement, had no witnesses to the murder, and found no physical evidence tying him to the crime. According to a federal lawsuit later filed by Mr. Williams, investigators also ignored other leads, including reports that another person had previously attempted to shoot Mr. Herring. Mr. Williams spent nearly a year in jail before the case against him was dismissed.

Cases like Ms. Woodruff's and Mr. Williams's highlight the dangers of law enforcement's overreliance on AI technology, including an unfounded belief that such technology is a fair and objective processor of data.

Absent comprehensive testing or oversight, the introduction of additional AI-driven technology will only increase the risk of wrongful conviction and may displace the effective policing strategies, such as community engagement and relationship-building, that we know can reduce wrongful arrests.

We enter this fall with a number of significant victories under our belt, including 7 exonerations since the start of the year. Through the cases of people like Rosa Jimenez and Leonard Mack, we've leveraged significant advances in DNA technology and other sciences to free innocent people from prison.

We are committed to countering the harmful effects of emerging technologies, advocating for research on AI's reliability and validity, and urging consideration of the ethical, legal, social, and racial justice implications of its use.

We support a moratorium on the use of facial recognition technology in the criminal legal system until such time as research establishes its validity and impacted communities are given the opportunity to weigh in on the scope of its implementation.

We are pushing for more transparency around so-called "black box" technologies, whose inner workings are hidden from users.

We believe that any law enforcement reliance on AI technology in a criminal case must be immediately disclosed to the defense and subjected to rigorous adversarial testing in the courtroom.

Building on President Biden's executive order directing the National Academy of Sciences to study certain AI-based technologies that can lead to wrongful convictions, we are also collaborating with various partners to collect the necessary data to enact reforms.

And, finally, we encourage Congress to make explicit the ways in which it will regulate investigative technologies to protect personal data.

Only through these efforts can we protect innocent people from further risk of wrongful conviction in today's digital age.

With gratitude,

Christina Swarns
Executive Director, Innocence Project

See the rest here:
When Artificial Intelligence Gets It Wrong - Innocence Project

Read More..

Dartmouth to Host Conference on Artificial Intelligence – Dartmouth News

An inaugural Dartmouth AI Conference to be held on Sept. 29 will honor the institution's legacy as the birthplace of artificial intelligence while also discussing the rapid advancements and challenges permeating the current AI landscape.

Spearheaded by the Tuck School of Business and the Tuck Center for Digital Strategies, the conference will convene industry stalwarts from diverse sectors including banking, health care, technology, venture capital, and consulting.

Patrick Wheeler, executive director of the Tuck Center for Digital Strategies, emphasized the timeliness and relevance of the discussions slated for the conference. "AI application is evolving rapidly across both academia and the business sector. Dartmouth stands at the crossroads of this evolution, fostering AI developments that are technically sound, ethically responsible, and practically beneficial for society," Wheeler says.

The one-day conference, to be held at Tuck's Raether McLaughlin Atrium, offers a rich platform for students, faculty, and staff to interact with leaders and experts steering the current innovations in the field. Alumni can participate virtually, ensuring the Dartmouth community worldwide can engage in the event.

An impressive roster of speakers will be featured at the event, including:

A central theme of the conference will be the responsible and ethical creation and utilization of AI. Dartmouth, with its rich interdisciplinary tradition, is uniquely positioned to lead discussions that meld deep technical expertise with a liberal arts approach to the ethical dimensions inherent in AI development.

"This event is a great opportunity to synthesize and showcase all the innovations in this exciting and dynamic field happening in different pockets around campus to audiences within and outside Dartmouth," says LaMar Bunts, chief transformation officer.

Dartmouth has a long history with AI. The 1956 Dartmouth Summer Research Project on Artificial Intelligence is widely seen to be the foundational event that kickstarted research in artificial intelligence.

Follow this link:
Dartmouth to Host Conference on Artificial Intelligence - Dartmouth News

Read More..

Artificial Intelligence: Key Business and Legal Issues to Consider – Sidley Austin LLP

The rapid growth of artificial intelligence (AI) development and adoption, particularly generative AI and machine learning applications, has captured the attention of business leaders, academics, investors, and regulators worldwide. AI is also requiring companies to confront an evolving host of questions across different areas of law, including privacy, cybersecurity, commercial and intellectual property transactions, intellectual property ownership and rights, products liability, labor and employment, insurance, consumer protection, corporate governance, national security, ethics, government policy, and regulation.

Below, we outline questions that companies and their boards should consider as they navigate this ever-evolving technological innovation. Many of these questions are industry-agnostic, but all companies must also address challenges specific to the industry and regulatory environment in which they operate.

Sidley has a multi-disciplinary AI industry team focused on providing our clients with practical and actionable guidance on the wide range of regulatory, transactional, and litigation issues companies face in evaluating, leveraging, and mitigating risk from AI.

To discuss the business and legal implications for your company, please contact one of the individuals below or one of the dedicated Sidley lawyers with whom you work.

Updated: September 19, 2023

In light of the evolving situation, we are reviewing and frequently updating information provided in the PDF.

Read the original:
Artificial Intelligence: Key Business and Legal Issues to Consider - Sidley Austin LLP

Read More..