
AI and the law: the challenges of making sure machine learning … – Scottish Business News

Sinead Machin is a Senior Associate at Complete Clarity Solicitors and Simplicity Legal.

EVERY advance in the dissemination of human knowledge, from the printing press to newspapers, television and the internet, has initially been seen as much a threat as an opportunity. But few new systems have been greeted with such suspicion as AI.

Largely because of fears of machine superiority and loss of human jobs and functions to Artificial Intelligence, debate about its impact on current and future society has verged on the dramatic and, in some cases, the hysterical.

But one thing is beyond dispute: AI is here, and it is here to stay. The only rational response is to learn to live with it, understand its capabilities and limitations, and think very clearly about checks and balances to ensure a net benefit rather than irreversible harm.

The impressive power of the technology, particularly tools such as ChatGPT, has been exercising the minds of the legal profession around the world as it gets to grips with the practical, economic and ethical implications of AI.

There is no doubt that AI will become, if it has not already, an indispensable tool for coping with the immense amount of data which lawyers have to handle in complex cases, and some of the mundane processes which underpin the legal infrastructure.

Certainly, in high-volume practices, machine learning and data analytics can be hugely beneficial in identifying and increasing the number of leads and prospects, and SEO teams are seeing significant opportunities for business growth.

AI comes into its own in the field of case management, with its limitless capacity for examining massive volumes of data, finding patterns, and making predictions or choices using algorithms and statistical models.

This is creating much quicker and more streamlined case management, which clients are already coming to expect. In fact, it may soon become a recognised basis for complaint if the speed and efficiencies which are now possible are not achieved.

More troubling discussion is taking place around whether AI could carry out some of the tasks traditionally performed by lawyers, such as researching, preparing and presenting cases.

The pitfalls of this line of thinking were amply illustrated recently by the story of New York attorney Steven Schwartz, who used ChatGPT to write a legal brief. The chatbot not only completely fabricated the case law which he cited in court but reassured him repeatedly that the information was accurate. The judge in the case was singularly unimpressed.

Lawyers must also be aware of the client-confidentiality risks of using AI bots. Client-specific information fed into a bot such as ChatGPT may be retained by OpenAI, the bot's developer, and could resurface in other contexts.

Scots law, of course, has its own unique characteristics, of which AI bots at this stage would likely be unaware, leading them to rely on English and Welsh cases and precedents which would have limited relevance.

However, it is learning fast. GPT-3.5 scored in the lowest 10% on the US Bar exam, but the next version, GPT-4, scored in the top 10%. It is conceivable that law-specific bots will be developed to concentrate solely on particular areas of expertise.

Master of the Rolls and Head of Civil Justice in England and Wales Sir Geoffrey Vos said recently (June 2023) that public trust may limit the use of AI in legal decisions, pointing to the emotional and human elements involved in areas such as family and criminal law.

He warned that while AI has the potential to be a valuable tool for predicting case outcomes and making informed decisions, it was not infallible and should be used in conjunction with human judgement and expertise.

He pointed out that ChatGPT itself said: "Ultimately, legal decision-making involves a range of factors beyond just predicting the outcome of a case, including strategic and ethical considerations and client goals."


Using machine learning to predict surgical site infection | IDR – Dove Medical Press

Introduction

Surgical site infection (SSI)1 frequently develops postoperatively; it poses a serious challenge for surgeons and can be fatal for patients. Many factors are responsible for the infection of surgical incisions, including smoking status, diabetes, advanced age, hypoproteinemia, and internal fixation.2,3 In spinal surgery,4 SSI is associated with prominent morbidity, healthcare expenses owing to readmission and reoperation, and poor prognosis.5,6 Artificial intelligence is widely used in medical research, and the predictive effectiveness of machine learning is widely recognized. After achieving great success in various prediction tasks, machine learning has attracted the attention of clinicians and medical researchers.7,8 In our previous studies, we constructed machine learning prediction models that demonstrated good predictive ability.9,10

In this study, a machine learning model and a web-based prediction tool were developed to predict SSI in patients undergoing lumbar spinal surgery. Various machine learning algorithms were compared to identify the most effective approach.11,12 As a powerful method for data processing and computation, machine learning offers considerable reliability in screening variables.13,14 Notably, most current machine learning prediction models simply compare the effectiveness of different algorithms and select the best one.

Therefore, we aimed to select the ideal clinical variables using various machine learning algorithms and their intersection to build an ideal prediction model and perform internal verification. This prediction model might guide clinical diagnosis and prevention.

We obtained ethical approval from the Institutional Review Board of our institute (Approval No. 2022-E398-01). This retrospective study adheres to the principles outlined in the Declaration of Helsinki. A total of 4019 patients who underwent lumbar internal fixation surgery at our institute from June 2012 to February 2021 were included in the study. Clinical data such as age, sex, diabetes, Modic changes, anesthesia score, operation status, and serological and imaging indexes were collected for statistical analysis. Operation status included the following parameters: use of antibiotics during the operation, operation time, anesthesia time, number of vertebral bodies spanned, screw number, and intraoperative blood transfusion. The serological parameters were glucose, WBC, hemoglobin (Hb), PLT, ESR, and albumin. The imaging indexes were skin-to-lamina thickness and sebum thickness; sebum thickness was measured on CT at three distinct locations along the lumbar surgical incision, and the average of these measurements was used in the study. Patients with incomplete information or those who did not meet the diagnostic criteria15 were excluded. Finally, 54 and 1273 patients were assigned to the SSI and normal lumbar fixation groups, respectively (Figure 1). Through random grouping, the data were divided into the test and verification groups (Table 1).

Table 1 The Distribution of Each Variable That Meets the Screening Condition

Figure 1 Data filtering and grouping.

R software (version 4.2.1; https://www.R-project.org) was used for statistical analyses. First, the filtered data were randomized into the test and verification groups. Second, in the test group, specific variables were screened via logistic regression analysis, Lasso regression analysis, support vector machine (SVM), and random forest. Specific variables acquired using these four methods were intersected, and a dynamic model was constructed. ROC and calibration curves were constructed to assess model performance. Finally, using the verification group, model performance was verified internally using ROC and calibration curves.

Single-factor logistic regression analysis was performed to select variables with p < 0.05. Then, multi-factor logistic regression analysis was performed; p < 0.05 was set as the threshold to select the predictive variables of this method.

Lasso regression analysis was performed, and a model was developed as a contraction approach to select risk factors from various variables as well as optimal predicting features based on SSI case data. LASSO regression and visualized analyses were conducted using the R glmnet package.
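The study uses R's glmnet; as an illustrative stand-in, an L1-penalised logistic regression in scikit-learn performs the same kind of shrinkage-based selection. The data and the penalty strength `C` here are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 6))
# Only columns 0 and 1 carry signal
logit = 1.5 * X[:, 0] - 1.5 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

Xs = StandardScaler().fit_transform(X)
# The L1 penalty shrinks uninformative coefficients toward exactly zero
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xs, y)
kept = [i for i, c in enumerate(lasso.coef_[0]) if abs(c) > 1e-8]
print(kept)
```

In glmnet the penalty strength would instead be chosen by cross-validation over a lambda path.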

SVM recursive feature elimination (SVM-RFE) has been developed as an efficient approach under machine learning. To predict SSI, we developed an SVM-RFE model using the rms package. Data were analyzed via tenfold cross-validation, followed by the acquisition of an output vector feature index and variable sorting in descending order of usefulness.
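The authors work in the R ecosystem; a comparable sketch with scikit-learn's RFECV wrapping a linear SVM, using tenfold cross-validation as in the paper (data and feature count are synthetic stand-ins), could be:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 8))
# Columns 0 and 1 are informative; the rest are noise
y = (X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

# Recursive feature elimination with 10-fold CV, mirroring SVM-RFE:
# features are dropped one at a time by smallest |coefficient|
selector = RFECV(SVC(kernel="linear"), step=1,
                 cv=StratifiedKFold(10), scoring="accuracy").fit(X, y)
print(selector.support_)  # boolean mask of retained features
```

`ranking_` then gives the elimination order, analogous to the paper's usefulness-sorted variable list.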

To construct the random forest model, the R randomForest package was used to select variables, perform calculations, and visualize relative variable importance. %IncMSE indicates the increase in mean squared error: random values were assigned to each variable to assess the importance of predictive variables, and the model's prediction error increased more when a more important predictive variable had its value randomly replaced. Consequently, a higher value indicates greater variable importance. IncNodePurity indicates the increase in node purity, calculated as the sum of squares of residual errors; it reflects how one variable affects observed value heterogeneity in every node within the classification tree, with a higher value again indicating greater variable importance.

We selected IncNodePurity as the indicator for judging whether a predictive variable was important. We identified the variables with the highest importance as the optimal predictive variables via tenfold cross-validation under five iterations.
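scikit-learn's impurity-based `feature_importances_` is the closest analogue to IncNodePurity; a toy sketch on synthetic data (not the study's variables) looks like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=(n, 5))
# Columns 0 and 1 drive the outcome; 2-4 are noise
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)

# feature_importances_ is impurity-based, analogous to IncNodePurity
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]
print(order)  # features sorted from most to least important
```

Selecting the top-ranked variables under cross-validation then mirrors the paper's screening step.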

The abovementioned methods were used to screen the predictive variables, and the variables common to all methods were identified using a Venn diagram. After constructing a dynamic prediction model with the common variables, ROC and calibration curves were constructed to evaluate model prediction performance; effectiveness was verified using the verification group.
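The intersection step itself is simple set logic; a sketch using hypothetical per-method variable sets, chosen here so the intersection matches the paper's four final predictors:

```python
# Hypothetical variable sets standing in for each screening method's output
logistic = {"transfusion", "glucose", "Modic_change", "Hb",
            "vertebra_span", "sebum_thickness"}
lasso = {"glucose", "Modic_change", "Hb", "sebum_thickness",
         "ESR", "WBC", "albumin"}
svm_rfe = {"glucose", "Modic_change", "Hb", "sebum_thickness",
           "age", "diabetes"}
random_forest = {"glucose", "Modic_change", "Hb", "sebum_thickness",
                 "anesthesia_time"}

# The predictors common to all four methods become the model inputs
common = logistic & lasso & svm_rfe & random_forest
print(sorted(common))
```

The Venn diagram in the paper is simply a visualization of this intersection.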

In total, the data of 1327 patients meeting the inclusion criteria were collected: age, sex, diabetes, Modic changes, anesthesia score, antibiotic use during the operation, operation time, anesthesia time, vertebral body number spanned, screw number, intraoperative blood transfusion, WBC, glucose, PLT, Hb, ESR, albumin, skin-to-lamina thickness, and sebum thickness. The patients were randomly divided into the test and verification groups. The distribution characteristics between the two groups are presented in Table 1. Additionally, Supplementary Figure 1 illustrates the correlations among the different variables in the test group.

Univariate logistic regression analysis identified variables with p < 0.05 as statistically significant; the screened variables were age, diabetes, Modic changes, anesthesia time, vertebral body number spanned, screw number, blood transfusion, WBC, glucose, albumin, ESR, Hb, and sebum thickness. Multivariate logistic regression analysis, again using p < 0.05 as the threshold, screened blood transfusion, glucose, Modic changes, Hb, vertebral body number spanned, and sebum thickness. Table 2 displays the results of the logistic regression analysis.

Table 2 Results After Logistic Regression Analysis

Results of the Lasso regression analysis of dependent variables are shown in Supplementary Figure 2A; 12 significant variables in patients with SSI were identified by comparison with patients without SSI (Supplementary Figure 2B).

Following SVM-RFE analysis, the ten variables with the lowest error rate were selected as predictive factors (Supplementary Figure 3A); each was statistically significant. Variables with the highest importance were determined using IncNodePurity from the random forest algorithm: the best regression effect was obtained by retaining the 10 variables with the highest importance after tenfold cross-validation (Supplementary Figure 3B).

Table 3 displays the variables with the highest importance selected via Lasso regression analysis, SVM-RFE, and random forest. The intersection of the results obtained using the four methods was determined using a Venn diagram (Figure 2). Four predictors were obtained: Hb, glucose, Modic change, and sebum thickness. We used these four predictors to build a prediction model (Figure 3).

Table 3 Risk Factors Screened by Three Machine Learning Algorithms

Figure 2 The intersection of variables screened by using logistic regression analysis, LASSO, random forest, and SVM-RFE.

Figure 3 Four independent risk factors were identified (Modic change, sebum thickness, Hb, and glucose), and a dynamic model was constructed. Categorical variables were visually represented using block plots, while the distribution of continuous variables was depicted through violin plots. Larger plots accommodated more variables for comprehensive visualization. The red marker on the graph indicates that the probability of postoperative surgical site infection (SSI) was 85.4% when all four independent risk factors were at the values shown.

To verify model efficiency, ROC (Figure 4B) and calibration (Figure 4A) curves were constructed using the test group; the area under the ROC curve (AUC) was 0.988. Calibration curve analysis revealed favorable consistency of the nomogram-predicted values compared with real measurements. In addition, the C-index of the model was 0.9861 (95% CI 0.981–0.994). Finally, we used the validation group for internal validation; the ROC and calibration curves are shown in Figures 4D and C, respectively. The AUC was 0.987, and calibration curve analysis revealed favorable consistency of the nomogram-predicted values compared with real measurements. The C-index was 0.982 (95% CI 0.974–0.999).
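As a reminder of what the AUC reported above measures: it equals the fraction of (positive, negative) pairs the model ranks correctly. A tiny worked example with made-up labels and probabilities:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical labels and predicted SSI probabilities
y_true = np.array([0, 0, 0, 1, 1, 1])
y_prob = np.array([0.1, 0.6, 0.2, 0.8, 0.4, 0.9])

# 3 positives x 3 negatives = 9 pairs; 8 are ranked correctly
# (the positive scored 0.4 loses only to the negative scored 0.6)
auc = roc_auc_score(y_true, y_prob)
print(auc)  # 8/9 ~ 0.889
```

An AUC near 0.99, as in this study, means almost every SSI patient was ranked above every non-SSI patient.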

Figure 4 (A and B) represent the calibration curve and ROC curve of the training group, respectively, where the area under the curve (AUC) is 0.988. (C and D) represent the calibration curve and ROC curve of the validation group, respectively, where the area under the curve (AUC) is 0.987.

In this study, we used machine learning algorithms and clinical data to develop an SSI prediction model. Three machine learning models were employed to filter variables, and their validity was assessed using the verification group. This strategy based on artificial intelligence has been adopted to help clinicians select early diagnostic approaches.13,16,17 The relationship between machine learning and medicine is extensive, involving the diagnosis and treatment of cancer, surgery, and internal diseases.18–20 Machine learning applications span imaging, metabolomics, proteomics, and more;21,22 random forest, SVM, CNN, GBX, and other algorithms represent only a small part of machine learning.23 Various diseases, including tumors, specific diseases, and inflammatory diseases, can be diagnosed and predicted via machine learning.9,10,24

Many studies have assessed SSI risk factors after spinal surgery,25–28 including the establishment of predictive models based on machine learning.12,29 In our study, we utilized a combination of logistic regression analysis and machine learning to identify common risk factors and develop a prediction model, which has not been accomplished in previous studies. Further, we identified four risk factors closely related to the occurrence of SSI: Modic change, sebum thickness, Hb, and glucose. The constructed prediction model has good predictive efficacy and visualization, further simplifying the clinician's judgment of and intervention for SSI.

Modic changes are imaging manifestations of lumbar spine degeneration and are probably involved in the body's immune response.30,31 Pradip et al found that Modic changes were chronic subclinical infection foci rather than degeneration markers alone.32 Ohtori et al reported that endplate abnormality is associated with TNF-induced axonal development and inflammation: patients with Modic Type 1 or 2 endplate changes on MRI exhibited significantly more TNF-immunoreactive cells and PGP 9.5-immunoreactive nerve fibers in the affected vertebral endplates than patients without endplate abnormalities on MRI.33 In our study, we also determined Modic changes to be a risk factor for SSI following lumbar surgery. Therefore, Modic changes are not only a manifestation of lumbar disc degeneration but also of chronic inflammation, and should hence receive added attention from clinicians.

Studies have shown that obesity is positively correlated with postoperative SSI occurrence.34,35 We found that sebum thickness, a critical factor for predicting the risk of postoperative SSI, was positively correlated with SSI occurrence, despite insufficient direct pathophysiological evidence linking sebum thickness and SSI. As sebum thickness and obesity are often positively correlated, we believe the pathophysiological mechanism linking sebum thickness and SSI is analogous to that linking obesity and SSI.36,37 Preoperative fat reduction is therefore instructive for SSI prevention.38

Hb content is often negatively correlated with SSI occurrence;39 we also confirmed this finding. Tissue growth at the incision site after surgery is inseparable from energy perfusion, and insufficient tissue blood perfusion is not conducive to tissue recovery and can even lead to tissue necrosis.40,41 Anemia is closely associated with SSI development; however, it is worth noting that perioperative blood transfusion may also be an independent factor for predicting postoperative SSI.42 Moreover, glucose has been a focal point in research related to SSIs.43–45 We found that preoperative blood glucose levels were positively correlated with SSI occurrence. Liu et al identified high preoperative serum glucose as an independent factor predicting SSI risk following posterior lumbar spinal surgery.46 Thus, spinal surgeons should pay attention to patients' preoperative blood glucose levels and intervene in time to prevent SSI.

Given the strong predictive efficacy of the model developed in our study, spine surgeons can anticipate the potential occurrence of SSIs prior to surgery by considering factors such as Modic changes, sebum thickness, hemoglobin levels, and preoperative blood glucose. In cases where a high risk is identified, appropriate intervention measures can be implemented before surgery, such as stabilizing blood glucose, administering blood transfusions, and prophylactic antibiotic use. The goal is to mitigate the risk of postoperative SSIs, facilitate patients' speedy recovery, and alleviate unnecessary financial burdens. Additionally, we identified intraoperative blood transfusion as a risk factor using logistic regression analysis, Lasso regression analysis, and random forest techniques. This finding is noteworthy and warrants attention from healthcare providers and patients alike.

Although we used various screening methods and constructed a prediction model with good performance, our study has some limitations. First, there might be selection and subjective bias owing to the retrospective nature of the study. Second, we constructed the machine learning algorithm model based on data from a single center; as a result, this model might not be applicable to other centers and requires external verification. Third, additional data are warranted, which might improve the diagnostic effectiveness of our model.

In our study, we employed logistic regression analysis and machine learning to create a dynamic model with strong predictive capabilities for SSIs. This dynamic model can be a valuable tool for healthcare professionals and patients in clinical practice.

SSI, surgical site infection; CI, confidence interval; AUC, area under the curve; BMI, body mass index; ASA, American Society of Anesthesiologists; OP-time, operation time; AT, anesthesia time; WBC, white blood cell; Hb, hemoglobin; PLT, platelet; ESR, erythrocyte sedimentation rate.

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

We confirm that all subjects and/or their legal guardians provided written informed consent for participation in this study. Prior approval of the study was obtained from the institutional ethical review board of The First Affiliated Hospital of Guangxi Medical University (Approval No. 2022-E39801). The study complies with the Declaration of Helsinki.

We would like to thank Dr. Xinli Zhan and Dr. Chong Liu for their efforts in this work.

All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis, and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

The authors declare that they have no competing interests.

1. Seidelman J, Anderson DJ. Surgical site infections. Infect Dis Clin North Am. 2021;35(4):901–929.

2. Ma T, Lu K, Song L, et al. Modifiable factors as current smoking, hypoalbumin, and elevated fasting blood glucose level increased the SSI risk following elderly hip fracture surgery. J Invest Surg. 2020;33(8):750–758.

3. Skeie E, Koch AM, Harthug S, et al. A positive association between nutritional risk and the incidence of surgical site infections: a hospital-based register study. PLoS One. 2018;13(5):e0197344.

4. Zhou J, Wang R, Huo X, Xiong W, Kang L, Xue Y. Incidence of surgical site infection after spine surgery: a systematic review and meta-analysis. Spine. 2020;45(3):208–216.

5. Strobel RM, Leonhardt M, Förster F, et al. The impact of surgical site infection - a cost analysis. Langenbecks Arch Surg. 2022;407(2):819–828.

6. McFarland A, Reilly J, Manoukian S, Mason H. The economic benefits of surgical site infection prevention in adults: a systematic review. J Hosp Infect. 2020;106(1):76–101.

7. Sultan AS, Elgharib MA, Tavares T, Jessri M, Basile JR. The use of artificial intelligence, machine learning and deep learning in oncologic histopathology. J Oral Pathol Med. 2020;49(9):849–856.

8. Sidey-Gibbons JAM, Sidey-Gibbons CJ. Machine learning in medicine: a practical introduction. BMC Med Res Methodol. 2019;19(1):64.

9. Zhu J, Lu Q, Liang T, et al. Development and validation of a machine learning-based nomogram for prediction of ankylosing spondylitis. Rheumatol Ther. 2022;9(5):1377–1397.

10. Zhou C, Huang S, Liang T, et al. Machine learning-based clustering in cervical spondylotic myelopathy patients to identify heterogeneous clinical characteristics. Front Surg. 2022;9:935656.

11. Liu WC, Ying H, Liao WJ, et al. Using preoperative and intraoperative factors to predict the risk of surgical site infections after lumbar spinal surgery: a machine learning-based study. World Neurosurg. 2022;162:e553–e560.

12. Wang H, Fan T, Yang B, Lin Q, Li W, Yang M. Development and internal validation of supervised machine learning algorithms for predicting the risk of surgical site infection following minimally invasive transforaminal lumbar interbody fusion. Front Med. 2021;8:771608.

13. Handelman GS, Kok HK, Chandra RV, Razavi AH, Lee MJ, Asadi H. eDoctor: machine learning and the future of medicine. J Intern Med. 2018;284(6):603–619.

14. Cote MP, Lubowitz JH, Brand JC, Rossi MJ. Artificial intelligence, machine learning, and medicine: a little background goes a long way toward understanding. Arthroscopy. 2021;37(6):1699–1702.

15. Borchardt RA, Tzizik D. Update on surgical site infections: the new CDC guidelines. JAAPA. 2018;31(4):52–54.

16. Lee D, Yoon SN. Application of artificial intelligence-based technologies in the healthcare industry: opportunities and challenges. Int J Environ Res Public Health. 2021;18(1):567.

17. Forsting M. Machine learning will change medicine. J Nucl Med. 2017;58(3):357–358.

18. Hashimoto DA, Witkowski E, Gao L, Meireles O, Rosman G. Artificial intelligence in anesthesiology: current techniques, clinical applications, and limitations. Anesthesiology. 2020;132(2):379–394.

19. Kong J, Ha D, Lee J, et al. Network-based machine learning approach to predict immunotherapy response in cancer patients. Nat Commun. 2022;13(1):3703.

20. Groot OQ, Ogink PT, Lans A, et al. Machine learning prediction models in orthopedic surgery: a systematic review in transparent reporting. J Orthop Res. 2022;40(2):475–483.

21. Harrison JH, Gilbertson JR, Hanna MG, et al. Introduction to artificial intelligence and machine learning for pathology. Arch Pathol Lab Med. 2021;145(10):1228–1254.

22. Staziaki PV, Wu D, Rayan JC, et al. Machine learning combining CT findings and clinical parameters improves prediction of length of stay and ICU admission in torso trauma. Eur Radiol. 2021;31(7):5434–5441.

23. Nayarisseri A, Khandelwal R, Tanwar P, et al. Artificial intelligence, big data and machine learning approaches in precision medicine & drug discovery. Curr Drug Targets. 2021;22(6):631–655.

24. Tran KA, Kondrashova O, Bradley A, Williams ED, Pearson JV, Waddell N. Deep learning in cancer diagnosis, prognosis and treatment selection. Genome Med. 2021;13(1):152.

25. Chen L, Gan Z, Huang S, et al. Blood transfusion risk prediction in spinal tuberculosis surgery: development and assessment of a novel predictive nomogram. BMC Musculoskelet Disord. 2022;23(1):182.

26. Namba T, Ueno M, Inoue G, et al. Prediction tool for high risk of surgical site infection in spinal surgery. Infect Control Hosp Epidemiol. 2020;41(7):799–804.

27. Haddad S, Millhouse PW, Maltenfort M, Restrepo C, Kepler CK, Vaccaro AR. Diagnosis and neurologic status as predictors of surgical site infection in primary cervical spinal surgery. Spine J. 2016;16(5):632–642.

28. Bohl DD, Shen MR, Mayo BC, et al. Malnutrition predicts infectious and wound complications following posterior lumbar spinal fusion. Spine. 2016;41(21):1693–1699.

29. Hopkins BS, Mazmudar A, Driscoll C, et al. Using artificial intelligence (AI) to predict postoperative surgical site infection: a retrospective cohort of 4046 posterior spinal fusions. Clin Neurol Neurosurg. 2020;192:105718.

30. Dudli S, Fields AJ, Samartzis D, Karppinen J, Lotz JC. Pathobiology of Modic changes. Eur Spine J. 2016;25(11):3723–3734.

31. Vigeland MD, Flåm ST, Vigeland MD, et al. Correlation between gene expression and MRI STIR signals in patients with chronic low back pain and Modic changes indicates immune involvement. Sci Rep. 2022;12(1):215.

32. Pradip IA, Dilip Chand Raja S, Rajasekaran S, et al. Presence of preoperative Modic changes and severity of endplate damage score are independent risk factors for developing postoperative surgical site infection: a retrospective case-control study of 1124 patients. Eur Spine J. 2021;30(6):1732–1743.

33. Ohtori S, Inoue G, Ito T, et al. Tumor necrosis factor-immunoreactive cells and PGP 9.5-immunoreactive nerve fibers in vertebral endplates of patients with discogenic low back pain and Modic Type 1 or Type 2 changes on MRI. Spine. 2006;31(9):1026–1031.

34. Yuan K, Chen HL. Obesity and surgical site infections risk in orthopedics: a meta-analysis. Int J Surg. 2013;11(5):383–388.

35. Lynch RJ, Ranney DN, Shijie C, Lee DS, Samala N, Englesbe MJ. Obesity, surgical site infection, and outcome following renal transplantation. Ann Surg. 2009;250(6):1014–1020.

36. Onyekwelu I, Glassman SD, Asher AL, Shaffrey CI, Mummaneni PV, Carreon LY. Impact of obesity on complications and outcomes: a comparison of fusion and nonfusion lumbar spine surgery. J Neurosurg Spine. 2017;26(2):158–162.

37. Lee JS, Terjimanian MN, Tishberg LM, et al. Surgical site infection and analytic morphometric assessment of body composition in patients undergoing midline laparotomy. J Am Coll Surg. 2011;213(2):236–244.

38. Inacio MC, Kritz-Silverstein D, Raman R, et al. The risk of surgical site infection and re-admission in obese patients undergoing total joint replacement who lose weight before surgery and keep it off postoperatively. Bone Joint J. 2014;96-b(5):629–635.

39. Kim BD, Smith TR, Lim S, Cybulski GR, Kim JY. Predictors of unplanned readmission in patients undergoing lumbar decompression: multi-institutional analysis of 7016 patients. J Neurosurg Spine. 2014;20(6):606–616.

40. Rammell J, Perre D, Boylan L, et al. The adverse impact of preoperative anaemia on survival following major lower limb amputation. Vascular. 2022;17085381211065622.

41. Lasocki S, Krauspe R, von Heymann C, Mezzacasa A, Chainey S, Spahn DR. PREPARE: the prevalence of perioperative anaemia and need for patient blood management in elective orthopaedic surgery: a multicentre, observational study. Eur J Anaesthesiol. 2015;32(3):160–167.

42. Higgins RM, Helm MC, Kindel TL, Gould JC. Perioperative blood transfusion increases risk of surgical site infection after bariatric surgery. Surg Obes Relat Dis. 2019;15(4):582–587.

43. Berríos-Torres SI, Umscheid CA, Bratzler DW, et al. Centers for Disease Control and Prevention guideline for the prevention of surgical site infection, 2017. JAMA Surg. 2017;152(8):784–791.

44. Hagedorn JM, Bendel MA, Hoelzer BC, Aiyer R, Caraway D. Preoperative hemoglobin A1c and perioperative blood glucose in patients with diabetes mellitus undergoing spinal cord stimulation surgery: a literature review of surgical site infection risk. Pain Pract. 2022:76.

45. Pennington Z, Lubelski D, Westbroek EM, Ahmed AK, Passias PG, Sciubba DM. Persistent postoperative hyperglycemia as a risk factor for operative treatment of deep wound infection after spine surgery. Neurosurgery. 2020;87(2):211–219.

46. Liu JM, Deng HL, Chen XY, et al. Risk factors for surgical site infection after posterior lumbar spinal surgery. Spine. 2018;43(10):732–737.


Protect AI Acquires huntr; Launches World's First Artificial Intelligence and Machine Learning Bug Bounty Platform – Yahoo Finance

huntr provides a platform to help security researchers discover, disclose, remediate, and be rewarded for AI and ML security threats

LAS VEGAS, August 08, 2023--(BUSINESS WIRE)--Protect AI, the leading artificial intelligence (AI) and machine learning (ML) security company, today announced the launch of huntr, a groundbreaking AI/ML bug bounty platform focused exclusively on protecting AI/ML open-source software (OSS), foundational models, and ML Systems. The company is a silver sponsor at Black Hat USA, Booth 2610.

The launch of the huntr AI/ML bug bounty platform comes as a result of Protect AI's acquisition of huntr.dev. Originally founded in 2020 by 418Sec founder Adam Nygate, huntr.dev quickly rose to become the world's 5th largest CVE Numbering Authority (CNA) for Common Vulnerabilities and Exposures (CVEs) in 2022. With a vast network of over ten thousand security researchers specializing in open-source software (OSS), huntr has been at the forefront of OSS security research and development. This success provides an opportunity for Protect AI to focus the platform on a critical and emerging need: AI/ML threat research.

In today's AI-powered world, nearly 80% of code in Big Data, AI, BI, and ML codebases relies on open-source components, according to Synopsys, with more than 40% of these codebases harboring high-risk vulnerabilities. In one example, Protect AI researchers found a critical Local File Inclusion/Remote File Inclusion vulnerability in MLflow, a widely used system for managing machine learning life cycles, which could enable attackers to gain full access to a cloud account, steal proprietary data, and expose critical IP in the form of ML models.

Furthermore, there is a critical shortage of security researchers with the AI/ML skills and expertise needed to find these AI security threats. This has led to an urgent need for comprehensive AI/ML security research focused on uncovering potential security flaws and safeguarding sensitive data and AI application integrity for enterprises.


"The vast artificial intelligence and machine learning supply chain is a leading area of risk for enterprises deploying AI capabilities. Yet, the intersection of security and AI remains underinvested. With huntr, we will foster an active community of security researchers, to meet the demand for discovering vulnerabilities within these models and systems," said Ian Swanson, CEO of Protect AI.

"With this acquisition by Protect AI, huntr's mission now exclusively centers on discovering and addressing OSS AI/ML vulnerabilities, promoting trust, data security, and responsible AI/ML deployment. We're thrilled to expand our reward system for researchers and hackers within our community and beyond," said Adam Nygate, founder and CEO of huntr.dev.

The New huntr Platform

huntr offers security researchers a comprehensive AI/ML bug hunting environment with intuitive navigation, targeted bug bounties with streamlined reporting, monthly contests, collaboration tools, vulnerability reviews, and the highest paying AI/ML bounties available to the hacking community. The first contest is focused on Hugging Face Transformers offering an impressive $50,000 reward.

huntr also bridges the critical knowledge gap in AI/ML security research and operates as an integral part of Protect AI's Machine Learning Security Operations (MLSecOps) community. By actively participating in huntr's AI/ML open-source-focused bug bounty platform, security researchers can build new expertise in AI/ML security, create new professional opportunities, and receive well-deserved financial rewards.

"AI and ML rely on open source software, but security research in these systems is often overlooked. huntr's launch for AI/ML security research is an exciting moment to unite and empower hackers in safeguarding the future of AI and ML from emerging threats," said Phil Wylie, a renowned Pentester.

Chloé Messdaghi, Head of Threat Research at Protect AI, emphasized the platform's ethos, stating, "We believe in transparency and fair compensation. Our mission is to cut through the noise and provide huntrs with a platform that recognizes their contributions, rewards their expertise, and fosters a community of collaboration and knowledge sharing."

Protect AI is a Skynet sponsor at DEF CON's AI Village, where Ms. Messdaghi will be chair of a panel entitled "Unveiling the Secrets: Breaking into AI/ML Security Bug Bounty Hunting," on Friday, August 11, at 4:00pm. The company is also a silver sponsor at Black Hat USA. These events will provide the opportunity for Protect AI's threat research team to connect in person with the security research community. To find out more, and become an AI/ML huntr, join the community at huntr.mlsecops.com. For information on participating in Protect AI's sessions at Black Hat and DEF CON visit us on LinkedIn and Twitter.

About Protect AI

Protect AI enables safer AI applications by providing organizations the ability to see, know and manage their ML environments. The company's AI Radar platform provides visibility into the ML attack surface by creating an ML Bill of Materials (MLBOM), remediates security vulnerabilities, and detects threats to prevent data and secrets leakage. Founded by AI leaders from Amazon and Oracle, Protect AI is funded by Acrew Capital, boldstart ventures, Evolution Equity Partners, Knollwood Capital, Pelion Ventures and Salesforce Ventures. The company is headquartered in Seattle, with offices in Dallas and Raleigh. For more information visit us on the web, and follow us on LinkedIn and X/Twitter.

View source version on businesswire.com: https://www.businesswire.com/news/home/20230808746694/en/

Contacts

Media: Marc Gendron, Marc Gendron PR for Protect AI, marc@mgpr.net, 617-877-7480

More:
Protect AI Acquires huntr; Launches World's First Artificial Intelligence and Machine Learning Bug Bounty Platform - Yahoo Finance

Read More..

The Technological Triad: 5G, Machine Learning, and Cloud … – Fagen wasanni

Exploring the Technological Triad: 5G, Machine Learning, and Cloud Computing in the Modern World

In the modern world, the technological triad of 5G, machine learning, and cloud computing is shaping the future of digital transformation. These three technologies are not only revolutionizing the way we live and work, but they are also driving the next wave of technological innovation.

5G, the fifth generation of wireless technology, is at the forefront of this technological triad. With its high-speed data transmission and low latency, 5G is set to revolutionize the way we communicate and interact with technology. It promises to enable a new era of smart cities, autonomous vehicles, and Internet of Things (IoT) devices, all of which require real-time data transmission and processing. Moreover, 5G is expected to provide the necessary infrastructure for the other two components of the technological triad, machine learning and cloud computing, to reach their full potential.

Machine learning, a subset of artificial intelligence (AI), is another key player in this technological triad. It involves the use of algorithms and statistical models to enable computers to perform tasks without explicit programming. In other words, machine learning allows computers to learn from data and make predictions or decisions without being explicitly programmed to do so. This technology is already being used in a wide range of applications, from recommendation systems and voice recognition to fraud detection and autonomous vehicles. With the advent of 5G, machine learning is expected to become even more prevalent as it will be able to process and analyze data in real-time, leading to more accurate and timely predictions and decisions.
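The claim that machine learning systems "learn from data and make predictions without being explicitly programmed" can be made concrete with a deliberately tiny example: a nearest-neighbor classifier. No fraud-detection rule is hand-coded anywhere; the decision emerges entirely from labeled examples, which are invented for this sketch.

```python
# A 1-nearest-neighbor classifier: label a new point with the label of its
# closest training example. The "learning" is simply remembering labeled data.

def nearest_neighbor_predict(training_data, point):
    """Return the label of the training example closest to `point`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    closest = min(training_data, key=lambda example: distance(example[0], point))
    return closest[1]

# Hypothetical transactions as (amount, hour-of-day), labeled "ok" or "fraud".
training = [
    ((20.0, 14), "ok"),
    ((35.0, 10), "ok"),
    ((900.0, 3), "fraud"),
    ((750.0, 2), "fraud"),
]

print(nearest_neighbor_predict(training, (800.0, 4)))   # resembles the fraud cluster
print(nearest_neighbor_predict(training, (25.0, 12)))   # resembles the ok cluster
```

Production fraud-detection models are far more sophisticated, but the principle is identical: the program's behavior comes from patterns in data rather than from rules a developer wrote by hand.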

The third component of the technological triad is cloud computing. This technology involves the delivery of computing services, including servers, storage, databases, networking, software, analytics, and intelligence, over the Internet (the cloud). Cloud computing offers several benefits, including cost savings, increased productivity, speed and efficiency, performance, and security. It also provides the necessary infrastructure for machine learning and 5G to function effectively. With cloud computing, businesses can store and process large amounts of data, run applications, and deliver services on a global scale. Moreover, with the advent of 5G, cloud computing is expected to become even more powerful as it will be able to process and analyze data at unprecedented speeds.

In conclusion, the technological triad of 5G, machine learning, and cloud computing is set to revolutionize the modern world. These three technologies are not only driving the next wave of technological innovation, but they are also shaping the future of digital transformation. With 5G, we can expect to see a new era of smart cities, autonomous vehicles, and IoT devices. With machine learning, we can expect to see more accurate and timely predictions and decisions. And with cloud computing, we can expect to see businesses delivering services on a global scale. As we move forward, it will be interesting to see how these three technologies continue to evolve and shape our world.

Follow this link:
The Technological Triad: 5G, Machine Learning, and Cloud ... - Fagen wasanni

Read More..

INT Simplifies Machine Learning and Processing and Augments Analytics Capabilities with Latest Release of – Benzinga

August 10, 2023 10:15 AM | 3 min read


The latest release of IVAAP by INT introduces an array of exciting new features and enhancements, providing users with unparalleled capabilities to extract deeper insights from their subsurface data.

HOUSTON, Aug. 10, 2023 /PRNewswire-PRWeb/ -- INT announced today the launch of IVAAP 2.11, the latest version of our Universal Cloud Data Visualization Platform. With powerful features and enhanced capabilities, IVAAP 2.11 takes subsurface data exploration and visualization to new heights, empowering users to make critical decisions with confidence and efficiency.


Some of the key highlights include:

"IVAAP 2.11 represents a significant milestone in our journey toward providing the oil and gas industry with the most advanced and comprehensive data visualization platform. With the introduction of external workflow support for machine learning and data processing and full compatibility with the OSDU Data Platform, IVAAP continues to empower geoscientists and engineers to explore, visualize, and automate their data like never before," said Hugues Thevoux, VP of Cloud Solutions at INT. "This release underscores our commitment to delivering cutting-edge solutions that drive efficiency, foster innovation, and enable our clients to make smarter decisions with confidence."

IVAAP 2.11 is now available for all existing users. To experience the power of IVAAP or to schedule a personalized demo, visit int.com/demo-gallery/ivaap/ or contact our sales team at intinfo@int.com.

To learn more about IVAAP 2.11, please visit int.com/ivaap/.

ABOUT IVAAP:

IVAAP is a Universal Cloud Data Visualization Platform where users can explore domain data, visualize 2D/3D G&G data (wells, seismic, horizons, surface), and perform data automation by integrating with external processing workflows and machine learning.

ABOUT INT:

INT software empowers the largest energy and services companies in the world to visualize their complex subsurface data (seismic, well log, reservoir, and schematics in 2D/3D). INT offers a visualization platform (IVAAP) and libraries (GeoToolkit) developers can use with their data ecosystem to deliver subsurface solutions (Exploration, Drilling, Production). INT's powerful HTML5/JavaScript technology can be used for data aggregation, API services, and high-performance visualization of G&G and petrophysical data in a browser. INT simplifies complex subsurface data visualization.

For more information about IVAAP or INT's other data visualization products, please visit https://www.int.com.

INT, the INT logo, and IVAAP are trademarks of Interactive Network Technologies, Inc., in the United States and/or other countries.


Media Contact

Claudia Juarez, INT, 1 7139757434, marketing@int.com, http://www.int.com

LinkedIn

SOURCE INT


2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

Visit link:
INT Simplifies Machine Learning and Processing and Augments Analytics Capabilities with Latest Release of - Benzinga

Read More..

Professor in Artificial Intelligence and Machine Learning job with … – Times Higher Education

Job description

Edinburgh Napier University is the #1 Modern University in Scotland. An innovative, learner-centric university with a modern and fresh outlook, Edinburgh Napier is ambitious, inclusive in its ethos and applied in its approach.

Edinburgh Napier University's phenomenal results from the Research Excellence Framework (2021) are testament to our growing strength and capability as a research institution. These results, alongside our consistently positive National Student Survey results and sustained high levels of graduate employability, demonstrate the increasing impact of Edinburgh Napier's collective work, quality and commitment.

REF 2021 assessed 68% of our research as either world-leading or internationally excellent, up 15% since 2014. Additionally, the University's research power metric rocketed from 250 to 718, making Edinburgh Napier the top-ranking Scottish modern university for both research power and research impact.

The University's improved power rating will now see our research funding increase as we take significant strides to grow our reputation as a research-focused institution as well as a teaching one. Through continuous investment in staff and our research environment, we are confident that we are well on our way to establishing ourselves as one of the UK's world-leading universities in research.

The School of Computing, Engineering & the Built Environment has over 200 academics and around 3,100 campus-based students, and delivers programmes with professional accreditations from the British Computer Society, the Institution of Engineering and Technology, The Chartered Institute of Building and other accreditation bodies. We have excellent computing, engineering and construction lab facilities. The School has embarked on a major development in the area of Industry 4.0, bringing together computer science, engineering, mathematics and construction technology. We are one of the UK's largest computer science academic units with key strengths in AI, cyber security and creative and social informatics. We house leading UK research centres in transport policy and sustainable construction. The School is based in the lively and exciting Merchiston area at the heart of Edinburgh, Scotland's inspiring capital.

The latest UK national research assessment, REF 2021, places our Computer Science research in the top-30 in the whole UK and 3rd best in Scotland (both in power ranking). In terms of research impact 100% of our work achieved the highest rating (4*), a performance achieved only by six other universities in the whole UK. Our research is underpinned by significant amounts of funding from prestigious sources including both EPSRC and Horizon 2020.

This is a great opportunity for an experienced academic with expertise in Artificial Intelligence and Machine Learning or related fields to contribute to the work of an experienced team exploring Search-based Optimisation, Evolutionary Robotics, Natural Language Generation, and Multi-Modal Healthcare Data Analytics. As a professor you will be expected to contribute to the leadership of the research group, especially in terms of driving the research agenda and leading the exploration of new foundational research areas. Areas of desirable expertise include, but are not limited to: machine learning with applications to robotics, machine learning theoretical foundations, machine learning applied to biomedical data and healthcare, search-based optimisation, deep learning systems, adversarial machine learning, generative models in machine learning, natural language processing with machine learning, explainable machine learning, and neuromorphic machine learning systems. With an 80% time allocation for research, this role will allow you to explore novel and emerging areas of artificial intelligence and machine learning, deliver excellent quality research papers and secure substantial external research funding.

The Professor in Artificial Intelligence and Machine Learning will contribute to and build programmes and modules to support the expansion of the School's teaching portfolio, which explores the changing nature of IT infrastructure, AI and ML applications to big data, business intelligence, and the impact of technology on business. You will actively contribute to our existing portfolio of computer science-based degree programmes.

We are looking for someone who can demonstrate enthusiasm for working in a cross-disciplinary manner in fundamental and applied research and in the development of research-informed teaching to enhance employability of our graduates. You will have the opportunity to expand your industry connections through our existing networks.

Further information about Edinburgh Napier University can be found here.

The Role

As a professor you will be a member of our Artificial Intelligence research group with:

Applicants must demonstrate:

Applicants preferably will also demonstrate:

If you would like to know more about this exciting opportunity, please click here to view our Grade 8-10 (level 1-3) role profiles.

How will we reward you?

Salary: £65,000 - £95,000 per annum (Grade 8-10; Level 1-3)

As the #1 Modern University in Scotland, Edinburgh Napier is here to make a difference. This is only possible because of the people that work here: it's our people that make us great. And with our people at the heart of what we do, it's important that you are supported and rewarded.

We are committed to providing a wide range of benefits including:

Further information about our benefits can be found here.

Additional Information

Informal enquiries about the role can be made to Professor Peter Andras (p.andras@napier.ac.uk) or Professor Ben Paechter (b.paechter@napier.ac.uk).

Applications for the role must be submitted via the Edinburgh Napier University job applications website; emailed applications will not be accepted.

Application closing date: Tuesday 15 August @ 11:59pm

Edinburgh Napier is committed to creating an environment where everyone feels proud, confident, challenged and supported, and is a holder of Disability Confident, Carer Positive and Stonewall Diversity Champion status. More details can be found here.

The rest is here:
Professor in Artificial Intelligence and Machine Learning job with ... - Times Higher Education

Read More..

AI Is Building Highly Effective Antibodies That Humans Can’t Even … – WIRED

James Field, founder and CEO of LabGenius.

The tests are almost fully automated, with an array of high-end equipment involved in preparing samples and running them through the various stages of the testing process: antibodies are grown based on their genetic sequence and then put to the test on biological assays, samples of the diseased tissue that they've been designed to tackle. Humans oversee the process, but their job is largely to move samples from one machine to the next.

"When you have the experimental results from that first set of 700 molecules, that information gets fed back to the model and is used to refine the model's understanding of the space," says Field. In other words, the algorithm begins to build a picture of how different antibody designs change the effectiveness of treatment: with each subsequent round of antibody designs, it gets better, carefully balancing exploitation of potentially fruitful designs with exploration of new areas.
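The feedback loop Field describes, alternating between exploiting promising designs and exploring new ones, can be sketched in miniature. Everything here is an invented stand-in (a design is a single number, the "lab result" is a toy function, and the batch sizes are arbitrary); it illustrates the explore/exploit idea, not LabGenius's actual system.

```python
import random

random.seed(0)

# Toy "lab result" the loop is trying to discover: effectiveness peaks at 0.7.
def true_effectiveness(design):
    return -(design - 0.7) ** 2

def next_batch_from(results, n_exploit=3, n_explore=2):
    """Pick the next designs to test: best-so-far (exploit) plus random (explore)."""
    ranked = sorted(results, key=results.get, reverse=True)
    exploit = ranked[:n_exploit]          # refine designs that already work
    explore = [random.random() for _ in range(n_explore)]  # probe new areas
    return exploit + explore

# Round 1: a random initial batch is "tested in the lab".
designs = [random.random() for _ in range(5)]
results = {d: true_effectiveness(d) for d in designs}

# Round 2: results are fed back, and the next batch mixes both strategies.
batch2 = next_batch_from(results)
```

Each subsequent round would score `batch2`, merge the new results into `results`, and repeat, exactly the cycle of testing, feeding back, and refining described above.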

"A challenge with conventional protein engineering is, as soon as you find something that works a bit, you tend to make a very large number of very small tweaks to that molecule to see if you can further refine it," Field says. Those tweaks may improve one property, how easily the antibody can be made at scale, for instance, but have a disastrous effect on the many other attributes required, such as selectivity, toxicity, potency, and more. The conventional approach means you may be barking up the wrong tree, or missing the wood for the trees: endlessly optimizing something that works a little bit, when there may be far better options in a completely different part of the map.

You're also constrained by the number of tests you can run, or the number of "shots on goal," as Field puts it. This means human protein-engineers tend to look for things they know will work. "As a result of that, you get all of these heuristics or rules of thumb that human protein-engineers do to try and find the safe spaces," Field says. "But as a consequence of that you quickly get the accumulation of dogma."

The LabGenius approach yields unexpected solutions that humans may not have thought of, and finds them more quickly: It takes just six weeks from setting up a problem to finishing the first batch, all directed by machine learning models. LabGenius has raised $28 million from the likes of Atomico and Kindred, and is beginning to partner with pharmaceutical companies, offering its services like a consultancy. Field says the automated approach could be rolled out to other forms of drug discovery too, turning the long, artisanal process of drug discovery into something more streamlined.

Ultimately, Field says, it's a recipe for better care: antibody treatments that are more effective, or have fewer side effects than existing ones designed by humans. "You find molecules that you would never have found using conventional methods," he says. "They're very distinct and often counterintuitive to designs that you as a human would come up with, which should enable us to find molecules with better properties, which ultimately translates into better outcomes for patients."

This article appears in the September/October 2023 edition of WIRED UK magazine.

See original here:
AI Is Building Highly Effective Antibodies That Humans Can't Even ... - WIRED

Read More..

Gavel to Gavel: Protecting your IP in the age of AI – Journal Record

Drew Palmer

The recent advent of commercially available machine learning systems and other forms of artificial intelligence has presented businesses with countless possibilities to integrate these capabilities into their operations, unlocking efficiency and gaining a competitive edge. But as with any innovation, concerns arise over how other members of the supply chain may utilize business or customer data and intellectual property in unforeseen ways.

Recently, Zoom posted a blog article that shed light on this issue, discussing a change in its terms of service. The post, published five months after the updated terms were released, aimed to enhance transparency and clarify how Zoom utilizes customer data to train its AI models. While Zoom assured its customers that their data would only be used with consent, the post also disclosed that such consent would be obtained through a pop-up window presented at the moment a user chose to use any of Zoom's AI features, giving users little time to read or consider the impact of providing that consent. This situation underscores the importance for organizations to understand how their suppliers leverage their data and intellectual assets, particularly with respect to AI systems that can repurpose assets in novel ways.

Organizations also must collaborate with their technology suppliers to establish necessary controls to ensure contractual compliance. This starts by integrating contractual clauses into legal agreements to regulate data and IP asset usage by requiring discrete controls on such usage. By including these types of safeguards, companies can better control how their data is used.

Every business should carefully evaluate its procurement and supply contracts to address these intellectual property concerns. By crafting specific terms that prohibit any unauthorized future use of customer data and intellectual assets, businesses can better protect their data and intellectual property rights as technology continues to evolve. Protecting these assets remains crucial to mitigate potential risks. To achieve this, businesses can require the use of advanced encryption methods, conduct regular audits, and ensure compliance through other relevant data protection safeguards.

As businesses embrace the benefits of AI and machine learning, it is vital that they take proactive measures to safeguard their data and intellectual assets. By understanding how technologies utilize their information and establishing robust contractual agreements, organizations can mitigate potential risks and confidently embrace the power of AI for a competitive advantage in the market.

Drew Palmer is an attorney with Crowe & Dunlevy, crowedunlevy.com, and a member of the Intellectual Property Practice Group.

See the article here:
Gavel to Gavel: Protecting your IP in the age of AI - Journal Record

Read More..

AI in Education – EducationNext

In Neal Stephenson's 1995 science fiction novel, The Diamond Age, readers meet Nell, a young girl who comes into possession of a highly advanced book, The Young Lady's Illustrated Primer. The book is not the usual static collection of texts and images but a deeply immersive tool that can converse with the reader, answer questions, and personalize its content, all in service of educating and motivating a young girl to be a strong, independent individual.

Such a device, even after the introduction of the Internet and tablet computers, has remained in the realm of science fiction, until now. Artificial intelligence, or AI, took a giant leap forward with the introduction in November 2022 of ChatGPT, an AI technology capable of producing remarkably creative responses and sophisticated analysis through human-like dialogue. It has triggered a wave of innovation, some of which suggests we might be on the brink of an era of interactive, super-intelligent tools not unlike the book Stephenson dreamed up for Nell.

Sundar Pichai, Google's CEO, calls artificial intelligence "more profound than fire or electricity or anything we have done in the past." Reid Hoffman, the founder of LinkedIn and current partner at Greylock Partners, says, "The power to make positive change in the world is about to get the biggest boost it's ever had." And Bill Gates has said that this new wave of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.

Over the last year, developers have released a dizzying array of AI tools that can generate text, images, music, and video with no need for complicated coding but simply in response to instructions given in natural language. These technologies are rapidly improving, and developers are introducing capabilities that would have been considered science fiction just a few years ago. AI is also raising pressing ethical questions around bias, appropriate use, and plagiarism.

In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect "probably the biggest positive transformation that education has ever seen." But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.

What Is Generative AI?

Artificial intelligence is a branch of computer science that focuses on creating software capable of mimicking behaviors and processes we would consider intelligent if exhibited by humans, including reasoning, learning, problem-solving, and exercising creativity. AI systems can be applied to an extensive range of tasks, including language translation, image recognition, navigating autonomous vehicles, detecting and treating cancer, and, in the case of generative AI, producing content and knowledge rather than simply searching for and retrieving it.

Foundation models in generative AI are systems trained on a large dataset to learn a broad base of knowledge that can then be adapted to a range of different, more specific purposes. This learning method is self-supervised, meaning the model learns by finding patterns and relationships in the data it is trained on.

Large Language Models (LLMs) are foundation models that have been trained on a vast amount of text data. For example, the training data for OpenAI's GPT model consisted of web content, books, Wikipedia articles, news articles, social media posts, code snippets, and more. OpenAI's GPT-3 models underwent training on a staggering 300 billion tokens, or word pieces, using more than 175 billion parameters to shape the model's behavior, nearly 100 times more data than the company's GPT-2 model had.

By doing this analysis across billions of sentences, LLM models develop a statistical understanding of language: how words and phrases are usually combined, what topics are typically discussed together, and what tone or style is appropriate in different contexts. That allows it to generate human-like text and perform a wide range of tasks, such as writing articles, answering questions, or analyzing unstructured data.
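The idea of building a statistical understanding of language from co-occurrence patterns can be shown in miniature with a bigram model: count which word follows which, then predict the most likely continuation. This is a deliberately tiny toy (the two-sentence corpus is invented, and real LLMs condition on far richer context with billions of parameters), but the core principle of learning word statistics from data is the same.

```python
from collections import Counter, defaultdict

# A two-sentence toy corpus, tokenized by whitespace.
corpus = (
    "machine learning models learn patterns from data . "
    "machine learning models make predictions from data ."
).split()

# Count, for every word, which words follow it and how often.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("machine"))   # "learning" always follows "machine" here
print(predict_next("models"))    # "learn" and "make" tie; ties resolve by insertion order
```

Scaled up to billions of sentences and much longer contexts, this counting-and-predicting scheme is the intuition behind how an LLM learns which words, topics, and styles belong together.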

LLMs include OpenAI's GPT-4, Google's PaLM, and Meta's LLaMA. These LLMs serve as foundations for AI applications. ChatGPT is built on GPT-3.5 and GPT-4, while Bard uses Google's Pathways Language Model 2 (PaLM 2) as its foundation.

Some of the best-known applications are:

ChatGPT 3.5. The free version of ChatGPT released by OpenAI in November 2022. It was trained on data only up to 2021, and while it is very fast, it is prone to inaccuracies.

ChatGPT 4.0. The newest version of ChatGPT, which is more powerful and accurate than ChatGPT 3.5 but also slower, and it requires a paid account. It also has extended capabilities through plug-ins that give it the ability to interface with content from websites, perform more sophisticated mathematical functions, and access other services. A new Code Interpreter feature gives ChatGPT the ability to analyze data, create charts, solve math problems, edit files, and even develop hypotheses to explain data trends.

Microsoft Bing Chat. An iteration of Microsoft's Bing search engine that is enhanced with OpenAI's ChatGPT technology. It can browse websites and offers source citations with its results.

Google Bard. Google's AI generates text, translates languages, writes different kinds of creative content, and writes and debugs code in more than 20 different programming languages. The tone and style of Bard's replies can be fine-tuned to be simple, long, short, professional, or casual. Bard also leverages Google Lens to analyze images uploaded with prompts.

Anthropic Claude 2. A chatbot that can generate text, summarize content, and perform other tasks, Claude 2 can analyze texts of roughly 75,000 words, about the length of The Great Gatsby, and generate responses of more than 3,000 words. The model was built using a set of principles that serve as a sort of "constitution" for AI systems, with the aim of making them more helpful, honest, and harmless.

These AI systems have been improving at a remarkable pace, including in how well they perform on assessments of human knowledge. OpenAI's GPT-3.5, which was released in March 2022, only managed to score in the 10th percentile on the bar exam, but GPT-4.0, introduced a year later, made a significant leap, scoring in the 90th percentile. What makes these feats especially impressive is that OpenAI did not specifically train the system to take these exams; the AI was able to come up with the correct answers on its own. Similarly, Google's medical AI model substantially improved its performance on a U.S. Medical Licensing Examination practice test, with its accuracy rate jumping to 85 percent in March 2021 from 33 percent in December 2020.

These two examples prompt one to ask: if AI continues to improve so rapidly, what will these systems be able to achieve in the next few years? What's more, new studies challenge the assumption that AI-generated responses are stale or sterile. In the case of Google's AI model, physicians preferred the AI's long-form answers to those written by their fellow doctors, and nonmedical study participants rated the AI answers as more helpful. Another study found that participants preferred a medical chatbot's responses over those of a physician and rated them significantly higher, not just for quality but also for empathy. What will happen when empathetic AI is used in education?

Other studies have looked at the reasoning capabilities of these models. Microsoft researchers suggest that newer systems exhibit more general intelligence than previous AI models and are coming strikingly close to human-level performance. While some observers question those conclusions, the AI systems display an increasing ability to generate coherent and contextually appropriate responses, make connections between different pieces of information, and engage in reasoning processes such as inference, deduction, and analogy.

Despite their prodigious capabilities, these systems are not without flaws. At times, they churn out information that might sound convincing but is irrelevant, illogical, or entirely false, an anomaly known as hallucination. The execution of certain mathematical operations presents another area of difficulty for AI. And while these systems can generate well-crafted and realistic text, understanding why the model made specific decisions or predictions can be challenging.

The Importance of Well-Designed Prompts

Using generative AI systems such as ChatGPT, Bard, and Claude 2 is relatively simple. One has only to type in a request or a task (called a prompt), and the AI generates a response. Properly constructed prompts are essential for getting useful results from generative AI tools. You can ask generative AI to analyze text, find patterns in data, compare opposing arguments, and summarize an article in different ways (see sidebar for examples of AI prompts).

One challenge is that, after using search engines for years, people have been preconditioned to phrase questions in a certain way. A search engine is something like a helpful librarian who takes a specific question and points you to the most relevant sources for possible answers. The search engine (or librarian) doesn't create anything new but efficiently retrieves what's already there.

Generative AI is more akin to a competent intern. You give a generative AI tool instructions through prompts, as you would to an intern, asking it to complete a task and produce a product. The AI interprets your instructions, thinks about the best way to carry them out, and produces something original or performs a task to fulfill your directive. The results aren't pre-made or stored somewhere; they're produced on the fly, based on the information the intern (generative AI) has been trained on. The output often depends on the precision and clarity of the instructions (prompts) you provide. A vague or poorly defined prompt might lead the AI to produce less relevant results. The more context and direction you give it, the better the result will be. What's more, the capabilities of these AI systems are being enhanced through the introduction of versatile plug-ins that equip them to browse websites, analyze data files, or access other services. Think of this as giving your intern access to a group of experts to help accomplish your tasks.

One strategy in using a generative AI tool is first to tell it what kind of expert or persona you want it to be. Ask it to be an expert management consultant, a skilled teacher, a writing tutor, or a copy editor, and then give it a task.
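The persona-then-task pattern is easy to express programmatically. The sketch below assembles such a prompt as plain text; the build_prompt helper and its field names are illustrative, not part of any tool's API, and the resulting string would simply be pasted or sent to whichever generative AI system is in use.

```python
# Hypothetical helper that assembles a persona-plus-task prompt as plain
# text. Any of the chat tools discussed here accepts a prompt written
# this way; nothing below calls a real AI service.

def build_prompt(persona: str, task: str, context: str = "") -> str:
    """Combine a persona, optional context, and a task into one prompt."""
    parts = [f"You are {persona}."]
    if context:
        parts.append(f"Context: {context}")
    parts.append(task)
    return "\n\n".join(parts)

prompt = build_prompt(
    persona="an experienced writing tutor",
    task="Review the essay below and suggest three concrete improvements.",
    context="The student is in 9th grade and is practicing thesis statements.",
)
print(prompt)
```

The more specific the persona and context are, the more the tool's output tends to match the intended audience and register.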

Prompts can also be constructed to get these AI systems to perform complex and multi-step operations. For example, let's say a teacher wants to create an adaptive tutoring program, for any subject, any grade, in any language, that customizes the examples for students based on their interests. She wants each lesson to culminate in a short-response or multiple-choice quiz. If the student answers the questions correctly, the AI tutor should move on to the next lesson. If the student responds incorrectly, the AI should explain the concept again, but using simpler language.

Previously, designing this kind of interactive system would have required a relatively sophisticated and expensive software program. With ChatGPT, however, just giving those instructions in a prompt delivers a serviceable tutoring system. It isn't perfect, but remember that it was built virtually for free, with just a few lines of English language as a command. And nothing in the education market today has the capability to generate almost limitless examples to connect the lesson concept to students' interests.
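The branching logic the teacher describes (advance on a correct answer, re-explain in simpler language on a wrong one) is simple enough to sketch in a few lines. In the sketch below, ask_model is a hypothetical stand-in for a call to a generative AI system; a real tutor would send these prompts to the AI and show its replies to the student.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to a generative AI system.
    return f"[AI response to: {prompt}]"

def check_quiz(lesson: str, student_answer: str, correct_answer: str) -> str:
    """Advance on a correct answer; otherwise re-explain the concept simply."""
    if student_answer.strip().lower() == correct_answer.strip().lower():
        return "advance"  # move on to the next lesson
    # Wrong answer: ask the AI to re-teach the same concept in simpler terms.
    reteach = ask_model(f"Explain {lesson} again, using simpler language.")
    print(reteach)
    return "repeat"

print(check_quiz("photosynthesis", "Sunlight", "sunlight"))
print(check_quiz("photosynthesis", "osmosis", "sunlight"))
```

In practice the whole loop can live inside the prompt itself, as described above; this sketch only makes the control flow explicit.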

Chained prompts can also help focus AI systems. For example, an educator can prompt a generative AI system first to read a practice guide from the What Works Clearinghouse and summarize its recommendations. Then, in a follow-up prompt, the teacher can ask the AI to develop a set of classroom activities based on what it just read. By curating the source material and using the right prompts, the educator can anchor the generated responses in evidence and high-quality research.
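A chained-prompt workflow like this amounts to feeding the output of one AI call into the next prompt. The sketch below shows the shape of that chain; ask_model is a stub standing in for an AI call, and guide_text is a placeholder for the curated source material.

```python
def ask_model(prompt: str) -> str:
    # Stub for a generative AI call; echoes the request so the chain is visible.
    return f"<response to: {prompt.splitlines()[0]}>"

guide_text = "(text of a What Works Clearinghouse practice guide goes here)"

# Step 1: have the AI read and summarize the curated source material.
summary = ask_model("Summarize the recommendations in this guide:\n" + guide_text)

# Step 2: anchor the follow-up request in that summary rather than in the
# model's general training data.
activities = ask_model(
    "Based on these recommendations, design three classroom activities:\n" + summary
)
print(activities)
```

Because step 2 sees only the output of step 1, the educator controls the evidence base the final activities draw on.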

However, much like fledgling interns learning the ropes in a new environment, AI does commit occasional errors. Such fallibility, while inevitable, underlines the critical importance of maintaining rigorous oversight of AI's output. Monitoring not only acts as a crucial checkpoint for accuracy but also becomes a vital source of real-time feedback for the system. It's through this iterative refinement process that an AI system, over time, can significantly minimize its error rate and increase its efficacy.

Uses of AI in Education

In May 2023, the U.S. Department of Education released a report titled Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. The department had conducted listening sessions in 2022 with more than 700 people, including educators and parents, to gauge their views on AI. The report noted that constituents "believe that action is required now in order to get ahead of the expected increase of AI in education technology, and they want to roll up their sleeves and start working together." People expressed anxiety about future potential risks with AI but also felt that AI "may enable achieving educational priorities in better ways, at scale, and with lower costs."

AI could serveor is already servingin several teaching-and-learning roles:

Instructional assistants. AI's ability to conduct human-like conversations opens up possibilities for adaptive tutoring or instructional assistants that can help explain difficult concepts to students. AI-based feedback systems can offer constructive critiques on student writing, which can help students fine-tune their writing skills. Some research also suggests certain kinds of prompts can help children generate more fruitful questions about learning. AI models might also support customized learning for students with disabilities and provide translation for English language learners.

Teaching assistants. AI might tackle some of the administrative tasks that keep teachers from investing more time with their peers or students. Early uses include automated routine tasks such as drafting lesson plans, creating differentiated materials, designing worksheets, developing quizzes, and exploring ways of explaining complicated academic materials. AI can also provide educators with recommendations to meet student needs and help teachers reflect, plan, and improve their practice.

Parent assistants. Parents can use AI to generate letters requesting individualized education plan (IEP) services or to ask that a child be evaluated for gifted and talented programs. For parents choosing a school for their child, AI could serve as an administrative assistant, mapping out school options within driving distance of home, generating application timelines, compiling contact information, and the like. Generative AI can even create bedtime stories with evolving plots tailored to a child's interests.

Administrator assistants. Using generative AI, school administrators can draft various communications, including materials for parents, newsletters, and other community-engagement documents. AI systems can also help with the difficult tasks of organizing class or bus schedules, and they can analyze complex data to identify patterns or needs. ChatGPT can perform sophisticated sentiment analysis that could be useful for measuring school-climate and other survey data.

Though the potential is great, most teachers have yet to use these tools. A Morning Consult and EdChoice poll found that while 60 percent say they've heard about ChatGPT, only 14 percent have used it in their free time, and just 13 percent have used it at school. It's likely that most teachers and students will engage with generative AI not through the platforms themselves but rather through AI capabilities embedded in software. Instructional providers such as Khan Academy, Varsity Tutors, and Duolingo are experimenting with GPT-4-powered tutors that are trained on datasets specific to these organizations to provide individualized learning support that has additional guardrails to help protect students and enhance the experience for teachers.

Google's Project Tailwind is experimenting with an AI notebook that can analyze student notes and then develop study questions or provide tutoring support through a chat interface. These features could soon be available on Google Classroom, potentially reaching over half of all U.S. classrooms. Brisk Teaching is one of the first companies to build a portfolio of AI services designed specifically for teachers: differentiating content, drafting lesson plans, providing student feedback, and serving as an AI assistant to streamline workflow among different apps and tools.

Providers of curriculum and instruction materials might also include AI assistants for instant help and tutoring tailored to the companies' products. One example is the edX Xpert, a ChatGPT-based learning assistant on the edX platform. It offers immediate, customized academic and customer support for online learners worldwide.

Regardless of the ways AI is used in classrooms, the fundamental task of policymakers and education leaders is to ensure that the technology is serving sound instructional practice. As Vicki Phillips, CEO of the National Center on Education and the Economy, wrote, "We should not only think about how technology can assist teachers and learners in improving what they're doing now, but what it means for ensuring that new ways of teaching and learning flourish alongside the applications of AI."

AI in Education - EducationNext

AI is starting to affect elections and Wisconsin has yet to take action – PBS Wisconsin

By Phoebe Petrovic, Wisconsin Watch

This article was first published by Wisconsin Watch.

Heading into the 2024 election, Wisconsin faces a new challenge state lawmakers here have so far failed to address: generative artificial intelligence.

AI can draft a fundraising email or campaign graphics in seconds, no writing or design skills required. Or, as the Republican National Committee showed in April, it can conjure lifelike videos of China invading Taiwan or migrants crossing the U.S. border made entirely of fictional AI-generated footage.

More recently, a Super PAC supporting a Republican presidential candidate's bid to make the Milwaukee debate stage on Aug. 23 used an AI-generated video of that candidate to fundraise, which one campaign finance expert called an innovative way around campaign finance rules that would otherwise ban a Super PAC and candidate from coordinating on an ad.

Technology and election experts say AI's applications will both transform and threaten elections across the United States. And Wisconsin, a gerrymandered battleground that previously weathered baseless claims of election fraud, may face an acute risk.

Yet Wisconsin lawmakers have not taken official steps to regulate use of the technology in campaigning, even as other states and Congress introduce and begin to implement guardrails.

Rep. Scott Krug, R-Nekoosa, chair of the Assembly Committee on Campaigns and Elections, told Wisconsin Watch he hasn't "related (AI) too much to elections just yet."

In the Senate's Committee on Shared Revenue, Elections and Consumer Protection, "it just hasn't come up yet," said Sen. Jeff Smith, D-Brunswick.

Election committee members in both chambers expressed interest in possible remedies but doubt that they could pass protections before the 2024 election cycle.

Rep. Clinton Anderson, D-Beloit, is drafting a bill that would mandate disclosure of AI, sometimes called synthetic media, in political ads, something experts call a basic step lawmakers could take to regulate the technology.

Wisconsin Rep. Clinton Anderson, D-Beloit, is working on a bill modeled on a Washington law that would require disclosure of the use of artificial intelligence in campaign ads. (Credit: Drake White-Bergey / Wisconsin Watch)

"If we wait til 2024, it's gonna be too late," Anderson said in an interview. "If we can get this minimum thing done, then maybe we can have a conversation about, 'What's the next step?'"

"No matter where you fall politically, I think you should want some transparency in campaigns," he added.

The Wisconsin Elections Commission declined to comment.

Several lawmakers said AI repackages old problems in new technology, noting voters have encountered deceptive visuals and targeted advertising before.

But generative AI makes such content cheaper, easier and faster to produce. New York University's Brennan Center for Justice notes that Russian-affiliated organizations spent more than $1 million a month in 2016 to produce manipulative political ads that could be created today with AI for a fraction of the cost.

Dietram Scheufele, who studies science communication and technology policy at the University of Wisconsin-Madison, said that while some of the doomsday predictions about AI are overblown, "we're definitely entering a new world."

The technology, he said, gets "real creepy real fast."

Scheufele cited a prior study in which researchers morphed candidates' faces with the participant's own face in a way that remained undetectable to the participant. They found that people who were politically independent or weakly partisan were more likely to prefer the candidates whose faces had been, unbeknownst to them, morphed with their own.

"This was done a long time ago before the idea of actually doing all of this in real time became a reality," Scheufele said. "But today, the threshold for producing this stuff is really, really low."

Campaigns could micro-target constituents, crafting uniquely persuasive communications or advertisements by tailoring them to a person's digital footprint or likeness. Darrell West, who studies technology at the nonpartisan Brookings Institution, calls this "precise message targeting," writing that AI "will allow campaigns to better focus on specific voting blocs with appeals that nudge them around particular policies and partisan opinions."

AI will also quicken the pace of communications and responses, permitting politicians to respond instantly to campaign developments, West wrote. "AI can scan the internet, think about strategy, and come up with a hard-hitting appeal in minutes, without having to rely on highly paid consultants or expert videographers."

And because AI technology is more accessible, it's not just well-funded campaigns or interest groups that might deploy it in elections. Mekela Panditharatne, counsel for the Brennan Center's Democracy Program, and Noah Giansiracusa, an assistant professor of mathematics and data science, described several ways outside actors might use the technology to deceive or influence voters.

Aside from using deepfakes to fabricate viral controversies, they could produce "legions of social media posts about certain issues to create the illusion of political agreement or the false impression of widespread belief in dishonest election narratives," Panditharatne and Giansiracusa wrote. They could deploy "tailored chatbots to customize interactions based on voter characteristics."

They could also use AI to target elections administrators, either through deluges of complaints from fake constituents or elaborate phishing schemes.

"There is plenty of past election disinformation in the training data underlying current generative AI tools to render them a potential ticking time bomb for future election disinformation," Panditharatne and Giansiracusa wrote.

For Scheufele, one major concern is timing. It can take seconds for AI to create a deepfake; it can take days for reporters to debunk it. AI-driven disinformation deployed in the days before an election could sway voters in meaningful ways.

By the time people realized the content was fake, Scheufele said, "the election is over and we have absolutely no constitutional way of relitigating it."

"This is like making the wrong call in the last minute of the Super Bowl and the Patriots win the Super Bowl, even though they shouldn't have," Scheufele said. "They're still going to be Super Bowl champions on Monday even though we all know that the wrong call was made."

"In the abstract, every single aspect of AI is totally manageable," Scheufele said.

"The problem is we're dealing with so much in such a short period of time because of how quickly that technology develops," he said. "We simply don't have the structures in place at the moment."

But Wisconsin lawmakers could take initial steps toward boosting transparency.

In May, Washington state passed a law requiring a clear disclaimer about AIs use in any political ad. Andersons team looked to Washingtons law as a model in drafting a Wisconsin bill.

Printed ads with manipulated images will need a disclosure in letters at least as big as any other letters in the ad, according to The Spokesman-Review. Manipulated audio must have an easily understood, spoken warning at the beginning and end of the commercial. For videos, a text disclosure must appear for the duration of the ad.

A similar bill addressing federal elections has been introduced in both chambers of Congress. A March 2020 proposal banning the distribution of deepfakes within 60 days of a federal election and creating criminal penalties went nowhere.

Krug called Washington's law "a pretty interesting idea."

"If (an ad is) artificially created, there has to be some sort of a disclaimer," Krug said.

However, he indicated Republicans may wait to move legislation until after Speaker Robin Vos, R-Rochester, convenes a task force later this year on AI in government.

Rep. Scott Krug, R-Nekoosa, chair of the Assembly elections committee, is open to regulating the use of AI in elections, but legislation may not be ready in time for the 2024 election. (Credit: Coburn Dukehart / Wisconsin Watch)

Sen. Mark Spreitzer, D-Beloit, another elections committee member, noted Wisconsin law already prohibits knowingly making or publishing "a false representation pertaining to a candidate or referendum which is intended or tends to affect voting at an election."

"I think you could read the plain language of that statute and say that a deepfake would violate it," he said. "But obviously, whenever you have new technology, I think it's worth coming back and making explicitly clear that an existing statute is intended to apply to that new technology."

Scheufele, Anderson, Spreitzer and Smith all said that Wisconsin should go beyond mandating disclosure of AI in ads.

"The biggest concern is disinformation coming from actors outside of the organized campaigns and political parties," Spreitzer said. Official entities are easier to regulate, in part because the government already does.

Additional measures will require a robust global debate, Scheufele said. He likened the urgency of addressing AI to nuclear power.

"What we never did for nuclear energy is really have a broad public debate about: Should we go there? Should we actually develop nuclear weapons? Should we engage in that arms race?" he said. "For AI, we may still have that opportunity where we really get together and say, 'Hey, what are the technologies that we're willing to deploy, that we're willing to actually make accessible?'"

The nonprofit Wisconsin Watch collaborates with WPR, PBS Wisconsin, other news media and the University of Wisconsin-Madison School of Journalism and Mass Communication. All works created, published, posted or disseminated by Wisconsin Watch do not necessarily reflect the views or opinions of UW-Madison or any of its affiliates.
