
Explained: Why Engineering course is losing its charm – News9 LIVE

The AISHE 2020-21 divulged that except Engineering, enrollment rose in all other undergraduate courses. (Photo credit: PTI)

Engineering, once a sought-after course, seems to be losing its charm among students. If the All India Survey on Higher Education (AISHE) 2020-21 is anything to go by, admissions to Bachelor of Technology (BTech) and Bachelor of Engineering (BE) courses declined between 2016-17 and 2020-21. The survey, released in January this year by the Ministry of Education, revealed that enrollment in Engineering courses fell by 10 per cent, from 40.85 lakh in 2016-17 to 36.63 lakh in 2020-21.

The number of admissions in Engineering dropped even as total enrollment in higher education rose from 3.85 crore in 2019-20 to nearly 4.14 crore in 2020-21. The AISHE 2020-21 also revealed that enrollment rose in all other undergraduate courses except Engineering.

In view of the declining number of admissions in BE and BTech courses, the All India Council for Technical Education (AICTE) imposed a ban on new engineering colleges three years ago. The moratorium on setting up new engineering institutions began in 2020-21 and continued till 2022-23.

The overall enrollment in AICTE-approved Engineering colleges declined from 26.95 lakh in 2012-13 to 23.66 lakh in 2021-22, and seat vacancy in Engineering colleges across the country stood at 45 per cent in 2020. However, the AICTE has decided to lift the ban on opening new engineering colleges in view of improved admissions in core branches in 2022.

As the falling number of admissions in Engineering courses does not paint an inspiring picture for those who want to join the field, here are some of the possible reasons behind declining enrollments.

Other than top engineering colleges like the Indian Institutes of Technology (IITs) and National Institutes of Technology (NITs), many institutes do not offer encouraging job prospects. Even if they get their students placed in a good company, fresh graduates do not get lucrative salary packages. Recent reports of layoffs by tech giants have also added to students' worries. Meta, the parent company of Facebook, announced on March 14 that it would let go of 10,000 employees, after laying off 11,000 employees in November last year. A week ago, e-commerce giant Amazon also decided to cut its workforce by 9,000 employees, on top of previously announced layoffs. Joining the fresh round of layoffs, Accenture announced it would reduce its employee strength by 19,000.

Weak job prospects coupled with high course fees make Engineering less attractive. Those who cannot afford the fees themselves take education loans to pursue an Engineering degree. When they don't get a handsome salary, or face the fear of being laid off, these students reel under the burden of paying equated monthly installments (EMIs).

Most institutions have failed to align their curriculum with changing times. They still follow an age-old syllabus that does not seem relevant in this day and age. Apart from this, the syllabus of Engineering courses is bulky. Barring a few top institutions, most colleges do not focus much on practical learning, making the course boring for students.

The dwindling number of enrollments in Engineering courses is a reminder for the government and institutions to bring about a change in technical education in the country.


Narro expands civil engineering capacity with dedicated team – Scottish Construction Now

The growth and expansion of Narro has continued with the ongoing enlargement of its dedicated civil engineering team.

The revamped team will take the lead on all drainage, highways, active travel, public realm and infrastructure engineering projects.

Amelia Donovan joins the team as a project engineer, based in the Edinburgh office. Amelia graduated from the University of Edinburgh in 2020 with an MEng in Civil Engineering. Since then, she has developed expertise in highways and transport infrastructure, which will further enhance the skillset within the team.

Amelia joins principal civil engineer and section leader Craig Smith, senior civil engineer Alan de Pellette, senior technician Jack Munro, project engineers Owen Cairns and Reece Edgar, and civils technician Stuart McColgan.

Craig Smith said: "I'm delighted that Amelia is joining the team; she's bringing some excellent experience in roads and transport, which complements the existing skillset. The addition of Owen, Jack and Stuart within the last six months has also really enhanced our technical resources, allowing us to tackle civil projects of increasing size and scale."

Narro managing director Ben Adam added: "Over the thirty-six years the company has been in operation, we've built up an excellent reputation for providing quality engineering consultancy services. Civil Engineering has always been a key sector for us, and we've done great projects, for example at Culzean Country Park or the Scottish National Gallery at Princes Street Gardens in Edinburgh. We've also built on our conservation and refurbishment expertise to restore historic infrastructure such as bridges, harbours and piers.

"We've really been seeing increasing demand for civil engineering work over the past few years. The growing number of enquiries from the public sector, and more projects having a civil engineering aspect, made appointing a dedicated team an obvious choice. We currently have over 50 projects with a civil engineering element! I'm delighted that the team is growing and I'm confident that they will continue to provide excellent support to our clients and partners across all six of our office locations."


Aerospace Data Acquisition System Market to Witness Growth … – Digital Journal

New Jersey, United States, Mar 27, 2023 /DigitalJournal/ A data acquisition (DAQ) system measures and records physical or electrical properties in order to understand a system's performance. DAQ systems are computer-based measurement systems that measure electrical or physical phenomena such as current, voltage, pressure, temperature or sound. They capture and store data from a real system for further scientific and engineering review.
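To make the capture-and-store idea concrete, here is a minimal sketch of a DAQ-style sampling loop in Python, with a simulated sine-wave sensor standing in for real hardware. The signal shape, sample rate and CSV layout are illustrative assumptions, not any vendor's API.

```python
import math
import csv
import io

def read_sensor(t):
    """Simulated voltage reading: a 50 Hz sine wave plus a DC offset.
    A real DAQ would obtain this from an analog-to-digital converter."""
    return 2.5 + 1.2 * math.sin(2 * math.pi * 50 * t)

def acquire(sample_rate_hz=1000, duration_s=0.01):
    """Sample the signal at a fixed rate, returning (time, value) records."""
    n_samples = int(sample_rate_hz * duration_s)
    return [(i / sample_rate_hz, read_sensor(i / sample_rate_hz))
            for i in range(n_samples)]

def store(records):
    """Serialize the captured records as CSV for later review."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["time_s", "voltage_v"])
    writer.writerows(records)
    return buf.getvalue()

records = acquire()
print(len(records))  # 10 samples over 10 ms at 1 kHz
```

The same measure-then-record loop underlies flight data recorders and the other equipment discussed below, only with ruggedized hardware and far higher channel counts.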

The global Aerospace Data Acquisition System Market is expected to grow at a robust CAGR of 5% during the forecast period of 2022 to 2029.

Get the PDF Sample Copy (Including FULL TOC, Graphs, and Tables) of this report @:

https://a2zmarketresearch.com/sample-request

The Aerospace Data Acquisition System Market research report is an intelligence study compiled with meticulous effort to present accurate and valuable information. The data examined covers both the existing top players and upcoming competitors. Business strategies of the key players and of new market entrants are studied in detail, along with a well-explained SWOT analysis, revenue shares and contact information. The report also provides market information in terms of development and capacities.

Some of the top companies influencing this market include:

Nuvation Engineering, Curtiss-Wright, Danelec Marine, Honeywell, GE Aviation, L3Harris, Teledyne Technologies, Acr Electronics, Flyht Aerospace Solutions, Phoenix International Holdings, Elbit Systems, AEVEX Aerospace, MTS Aerospace, Dewesoft, ETMC Technologies, Digilogic Systems, Bustec, DynamicSignals, Hi-Techniques

Various factors are responsible for the market's growth trajectory, which are studied at length in the report. In addition, the report lists the restraints that pose a threat to the global Aerospace Data Acquisition System market. This report consolidates primary and secondary research, which provides market size, share, dynamics, and forecasts for various segments and sub-segments considering the macro and micro environmental factors. It also gauges the bargaining power of suppliers and buyers, the threat from new entrants and product substitutes, and the degree of competition prevailing in the market.

Global Aerospace Data Acquisition System market segmentation:

The market is segmented based on type, product, end users, raw materials and so on. The segmentation helps to deliver a precise explanation of the market.

Market Segmentation: By Type

Flight Data Recorders
Cockpit Data Recorders
Voyage Data Recorders
Others

Market Segmentation: By Application

Military Aircraft
Private Aircraft

Competitor Analysis

Global Aerospace Data Acquisition System Market research report offers:

For Any Query or Customization:

https://a2zmarketresearch.com/ask-for-customization

This report studies the global Aerospace Data Acquisition System market, analyzes and researches the development status and forecast in North America, Asia Pacific, Europe, the Middle East & Africa, and Latin America. Various key players are discussed in detail and a well-informed idea of their popularity and strategies is mentioned.

The cost analysis of the Global Aerospace Data Acquisition System Market has been performed considering manufacturing expenses, labor cost, and raw materials along with their market concentration rate, suppliers, and the price trend. It also assesses the bargaining power of suppliers and buyers, the threat of new entrants and product substitutes, and the degree of competition prevailing in the market. Other factors such as supply chain, downstream buyers, and sourcing strategy have been assessed to provide a comprehensive and in-depth view of the market.

The report answers questions such as:

Contents

Global Aerospace Data Acquisition System Market Research Report 2022-2029

Chapter 1 Aerospace Data Acquisition System Market Overview

Chapter 2 Global Economic Impact on Industry

Chapter 3 Global Market Competition by Manufacturers

Chapter 4 Global Production, Revenue (Value) by Region

Chapter 5 Global Supply (Production), Consumption, Export, Import by Regions

Chapter 6 Global Production, Revenue (Value), Price Trend by Type

Chapter 7 Global Market Analysis by Application

Chapter 8 Manufacturing Cost Analysis

Chapter 9 Industrial Chain, Sourcing Strategy, and Downstream Buyers

Chapter 10 Marketing Strategy Analysis, Distributors/Traders

Chapter 11 Market Effect Factors Analysis

Chapter 12 Global Aerospace Data Acquisition System Market Forecast

Buy Exclusive Report @:

https://www.a2zmarketresearch.com/checkout

Contact Us:

Roger Smith

1887 WHITNEY MESA DR HENDERSON, NV 89014

[emailprotected]

+1 775 237 4157


Everything to Know About Artificial Intelligence, or AI – The New York Times

Welcome to On Tech: A.I., a pop-up newsletter that will teach you about artificial intelligence, especially the new breed of chatbots like ChatGPT, all in only five days.

We'll tackle some of the big themes and questions around A.I. By the end of the week, you'll know enough to command the room at a dinner party, or impress your co-workers.

Every day, we'll give you a quiz and a homework assignment. (A pro tip: Ask the chatbots themselves how they work, or about concepts you don't understand. Answering such questions is one of their most useful skills. But keep in mind that they sometimes get things wrong.)

Let's start at the beginning.

The term artificial intelligence gets tossed around a lot to describe robots, self-driving cars, facial recognition technology and almost anything else that seems vaguely futuristic.

A group of academics coined the term in the late 1950s as they set out to build a machine that could do anything the human brain could do: skills like reasoning, problem-solving, learning new tasks and communicating using natural language.

Progress was relatively slow until around 2012, when a single idea shifted the entire field.

It was called a neural network. That may sound like a computerized brain, but, really, it's a mathematical system that learns skills by finding statistical patterns in enormous amounts of data. By analyzing thousands of cat photos, for instance, it can learn to recognize a cat. Neural networks enable Siri and Alexa to understand what you're saying, identify people and objects in Google Photos and instantly translate dozens of languages.
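For readers who like to see the idea in code, here is a deliberately tiny illustration of learning from examples rather than rules: a single artificial neuron that learns the logical AND pattern from labeled data. Real neural networks stack many layers of such units; the task, learning rate and epoch count here are arbitrary choices for the sketch.

```python
import random

# Training examples: inputs (x1, x2) and the target output (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.random(), random.random()]  # connection weights, start random
b = 0.0                                 # bias term

for _ in range(50):  # repeated exposure to the examples
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out              # how wrong was the neuron?
        w[0] += 0.1 * err * x1          # nudge weights toward the answer
        w[1] += 0.1 * err * x2
        b += 0.1 * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # → [0, 0, 0, 1]: the pattern was learned, not programmed
```

The key point is that nobody wrote an "AND rule" into the program; the weights settled into the right values purely from the statistical pattern in the examples.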

A New Generation of Chatbots

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from the research lab OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.

The next big change: large language models. Around 2018, companies like Google, Microsoft and OpenAI began building neural networks that were trained on vast amounts of text from the internet, including Wikipedia articles, digital books and academic papers.

Somewhat to the experts' surprise, these systems learned to write unique prose and computer code and to carry on sophisticated conversations. This is sometimes called generative A.I. (More on that later this week.)

The result: ChatGPT and other chatbots are now poised to change our everyday lives in dramatic ways. Over the next four days, we will explain the technology behind these bots, help you understand their abilities and limitations, and explore where they are headed in the years to come.

Tuesday: How do chatbots work?

Wednesday: How can they go wrong?

Thursday: How can you use them right now?

Friday: Where are they headed?

You've got some homework to do! One of the best ways to understand A.I. is to use it yourself.

The first step is to sign up for these chatbots. The Bing and Bard chatbots are being rolled out slowly, and you may need to get on their waiting lists for access. ChatGPT currently has no waiting list, but it requires setting up a free account.

Once you're ready, just type your words (known as a prompt) into the text box, and the chatbot will reply. You may want to play around with different prompts to see if you get a different response.

Today's assignment: Ask ChatGPT or one of its competitors to write a cover letter for your dream job, like, say, a NASA astronaut.

We want to see the results! Share them as a comment and see what other people have submitted.

We've been covering developments in artificial intelligence for a long time, and we've both written recent books on the subject. But this moment feels distinctly different from what's come before. We recently chatted on Slack with our editor, Adam Pasick, about how we're each approaching this unique point in time.

Cade: The technologies driving the new wave of chatbots have been percolating for years. But the release of ChatGPT really opened people's eyes. It set off a new arms race across Silicon Valley. Tech giants like Google and Meta had been reluctant to release this technology, but now they're racing to compete with OpenAI.

Kevin: Yeah, it's crazy out there; I feel like I've got vertigo. There's a natural inclination to be skeptical of tech trends. Wasn't crypto supposed to change everything? Weren't we all just talking about the metaverse? But it feels different with A.I., in part because millions of users are already experiencing the benefits. I've interviewed teachers, filmmakers and engineers who are using tools like ChatGPT every day. And it came out only four months ago!

Adam: How do you balance the excitement out there with caution about where this could go?

Cade: A.I. is not as powerful as it might seem. If you take a step back, you realize that these systems can't duplicate our common sense or reasoning in full. Remember the hype around self-driving cars: Were those cars impressive? Yes, remarkably so. Were they ready to replace human drivers? Not by a long shot.

Kevin: I suspect that tools like ChatGPT are actually more powerful than they seem. We haven't yet discovered everything they can do. And, at the risk of getting too existential, I'm not sure these models work so differently than our brains. Isn't a lot of human reasoning just recognizing patterns and predicting what comes next?

Cade: These systems mimic humans in some ways but not in others. They exhibit what we can rightly call intelligence. But as OpenAI's chief executive told me, this is an "alien intelligence." So, yes, they will do things that surprise us. But they can also fool us into thinking they are more like us than they really are. They are both powerful and flawed.

Kevin: Sounds like some humans I know!


Neural network: A mathematical system, modeled on the human brain, that learns skills by finding statistical patterns in data. It consists of layers of artificial neurons: The first layer receives the input data, and the last layer outputs the results. Even the experts who create neural networks don't always understand what happens in between.

Large language model: A type of neural network that learns skills, including generating prose, conducting conversations and writing computer code, by analyzing vast amounts of text from across the internet. The basic function is to predict the next word in a sequence, but these models have surprised experts by learning new abilities.
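A toy sketch of that "predict the next word" objective, using simple bigram counts instead of a neural network. The training sentence is made up for illustration; real models learn from vastly more text and use far more context than one preceding word.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" (invented for the example).
text = ("the cat sat on the mat the cat ate the fish "
        "the dog sat on the rug").split()

# Count which word follows which.
successors = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Predict the most frequent successor of `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat", its most frequent successor here
```

Large language models replace these raw counts with learned neural-network weights over whole passages, but the underlying training signal, guess the next word and be corrected, is the same in spirit.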

Generative A.I.: Technology that creates content, including text, images, video and computer code, by identifying patterns in large quantities of training data, and then creating new, original material that has similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images.



Humans In The GenAI Loop – Forbes

An image designed with artificial intelligence by Berlin-based digital creator Julian van Dieken, inspired by Johannes Vermeer's painting "Girl with a Pearl Earring," at the Mauritshuis museum in The Hague on March 9, 2023. Van Dieken's AI-made work is part of a special installation of fans' recreations of Vermeer's painting on display at the museum. (Photo by Simon Wohlfahrt/AFP via Getty Images)

Generative AI, the technology behind ChatGPT, is going supernova, as astronomers say, outshining other innovations for the moment. But despite alarmist predictions of AI overlords enslaving mankind, the technology still requires human handlers and will for some time to come.

While AI can generate content and code at a blinding pace, it still requires humans to oversee the output, which can be low quality or simply wrong. Whether it be writing a report or writing a computer program, the technology cannot be trusted to deliver accuracy that humans can rely on. It's getting better, but even that process of improvement depends on an army of humans painstakingly correcting the AI models' mistakes in an effort to teach them to behave.

"Humans in the loop" is an old concept in AI. It refers to the practice of involving human experts in training and refining AI systems to ensure that they perform correctly and meet the desired objectives.

In the early days of AI research, computer scientists focused on developing rule-based systems that could reason and make decisions based on pre-programmed rules. However, these systems were tedious to construct, requiring experts to write down the rules, and were limited by the fact that they could only operate within the constraints of the rules explicitly programmed into them.

As AI technology advanced, researchers began to explore new approaches, such as machine learning and neural networks, that enabled computers to learn on their own from large volumes of training data.

But the dirty little secret behind the first wave of such applications, which are still the dominant form of AI used today, is that they depend on hand-labeled data. Tens of thousands of people continue to toil at the mind-numbing task of putting labels on images, text and sound to teach supervised AI systems what to look or listen for.

Then along came generative AI, which does not require labeled data. It teaches itself by consuming vast amounts of data and learning the relationships within that data, much as an animal does in the wild. Large language models, which use generative AI, learn the world through the lens of text, and the world has been amazed by these models' ability to compose human-like answers and even engage in human-like conversations.

ChatGPT, a large language model trained by OpenAI, has awed the world with the depth of its knowledge and the fluency of its responses. Nevertheless, its utility is limited by so-called hallucinations, mistakes in the generated text that are semantically or syntactically plausible but are, in fact, incorrect or nonsensical.

The answer? Humans, again. OpenAI is working to address ChatGPT's hallucinations through reinforcement learning with human feedback (RLHF), employing, yes, a large number of workers.

RLHF has been employed to shape ChatGPT's behavior: the data collected during its interactions are used to train a neural network that functions as a "reward predictor." The reward predictor evaluates ChatGPT's outputs and predicts a numerical score representing how well those outputs align with the system's desired behavior. A human evaluator periodically checks ChatGPT's responses and selects those that best reflect the desired behavior. This feedback is used to adjust the reward-predictor network, which is then utilized to modify the behavior of the AI model.
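As a rough sketch only, and emphatically not OpenAI's actual implementation, the reward-predictor idea can be miniaturized like this: a scoring function over a couple of hypothetical hand-picked features, with human preference judgments nudging its weights so that preferred responses score higher in the future.

```python
def features(response):
    # Hypothetical features for the sketch; a real reward model reads
    # the full text with a neural network, not hand-picked signals.
    return [len(response.split()), response.count("!")]

weights = [0.0, 0.0]

def reward(response):
    """Predicted reward: a weighted sum over the response's features."""
    return sum(w * f for w, f in zip(weights, features(response)))

def update_from_preference(preferred, rejected, lr=0.1):
    """A human evaluator picked `preferred` over `rejected`: move the
    weights so the preferred response's predicted reward rises relative
    to the rejected one's (a simplified pairwise update)."""
    fp, fr = features(preferred), features(rejected)
    for i in range(len(weights)):
        weights[i] += lr * (fp[i] - fr[i])

update_from_preference("A calm, detailed answer with several words",
                       "NO!!!")
candidates = ["NO!!!", "A calm, detailed answer with several words"]
best = max(candidates, key=reward)  # the model now prefers calm answers
```

The real system adds a full reinforcement-learning loop on top of the reward model, but the core feedback path, human picks, reward model adjusts, model behavior shifts, is the one described above.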

Ilya Sutskever, OpenAI's chief scientist and one of the creators of ChatGPT, believes that the problem of hallucinations will disappear with time as large language models learn to anchor their responses in reality. He suggests that the limitations of ChatGPT that we see today will diminish as the model improves. However, humans in the loop are likely to remain a feature of the amazing technology for years to come.

This is why generative AI coding assistants like GitHub's Copilot and Amazon's CodeWhisperer are just that: assistants, working in concert with experienced coders who can correct their mistakes or pick the best option among a handful of coding suggestions. While AI can generate code at a rapid pace, humans bring creativity, context, and critical thinking skills to the table.

True autonomy in AI depends on trust and reliability of AI systems, which may come as those systems improve. But for now, humans are the overlords and trusted results depend on collaboration between humans and AI.

Sylvain Duranton is the Global Leader of BCG X and a member of BCG's Executive Committee. BCG X is the tech build and design unit of BCG. Turbocharging BCG's deep industry and functional expertise, BCG X brings together advanced tech knowledge and ambitious entrepreneurship to help organizations enable innovation at scale. With nearly 3,000 technologists, scientists, programmers, engineers, and human-centered designers located across 80+ cities, BCG X builds and designs platforms and software to address the world's most important challenges and opportunities. Teaming across practices, and in close collaboration with clients, their end-to-end global team unlocks new possibilities. Together they're creating the bold and disruptive products, services, and businesses of tomorrow. Duranton was the global leader and founder of BCG GAMMA, BCG's AI and Data + Analytics unit.


Godfather of AI Says There’s a Minor Risk It’ll Eliminate Humanity – Futurism

"It's not inconceivable."Nonzero Chance

Geoffrey Hinton, a British computer scientist, is best known as the "godfather of artificial intelligence." His seminal work on neural networks broke the mold by mimicking the processes of human cognition, and went on to form the foundation of machine learning models today.

And now, in a lengthy interview with CBS News, Hinton shared his thoughts on the current state of AI, which he sees as at a "pivotal moment," with the advent of artificial general intelligence (AGI) looming closer than we'd think.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI," Hinton said. "And now I think it may be 20 years or less."

AGI is the term that describes a potential AI that could exhibit human or superhuman levels of intelligence. Rather than being overtly specialized, an AGI would be capable of learning and thinking on its own to solve a vast array of problems.

For now, omens of AGI are often invoked to drum up the capabilities of current models. But regardless of the industry bluster hailing its arrival, or how long it might really be before AGI dawns on us, Hinton says we should be carefully considering its consequences now, which may include the minor issue of it trying to wipe out humanity.

"It's not inconceivable, that's all I'll say," Hinton told CBS.

Still, Hinton maintains that the real issue on the horizon is how AI technology that we already have AGI or not could be monopolized by power-hungry governments and corporations (see: the former non-profit and now for-profit OpenAI).

"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two," Hinton said in the interview. "People should be thinking about those issues."

Luckily, by Hinton's outlook, humanity still has a little bit of breathing room before things get completely out of hand, since current publicly available models are mercifully stupid.

"We're at this transition point now where ChatGPT is this kind of idiot savant, and it also doesn't really understand about truth, " Hinton told CBS, because it's trying to reconcile the differing and opposing opinions in its training data. "It's very different from a person who tries to have a consistent worldview."

But Hinton predicts that "we're going to move towards systems that can understand different world views," which is spooky, because it inevitably means whoever is wielding the AI could use it to push a worldview of their own.

"You don't want some big for-profit company deciding what's true," Hinton warned.



ChatGPT in the Humanities Panel: Researchers Share Concerns, Prospects of Artificial Intelligence in Academia – Cornell University The Cornell Daily…

Does the next Aristotle, Emily Dickinson or Homer live on your computer? A group of panelists explored this idea in a talk titled "ChatGPT and the Humanities" on Friday in the A.D. White House's Guerlac Room.

ChatGPT's ability to produce creative literature was one of the central topics explored in the talk, as the discourse on the use of artificial intelligence software in academic spheres continues to grow.

In the panel, Prof. Morten Christiansen, psychology, Prof. Laurent Dubreuil, comparative literature, Pablo Contreras Kallens grad and Jacob Matthews grad explored the benefits and consequences of utilizing artificial intelligence within humanities research and education.

The forum was co-sponsored by the Society for the Humanities, the Humanities Lab and the New Frontier Grant program.

The Society for the Humanities was established in 1966 and connects visiting fellows, Cornell faculty and graduate students to conduct interdisciplinary research connected to an annual theme. This year's focal theme is "Repair," which refers to the conservation, restoration and replication of objects, relations and histories.

All four panelists are members of the Humanities Lab, which works to provide an intellectual space for scholars to pursue research relating to the interaction between the sciences and the humanities. The lab was founded by Dubreuil in 2019 and is currently led by him.

Christiansen and Dubreuil also recently received New Frontier Grants for their project titled "Poetry, AI and the Mind: A Humanities-Cognitive Science Transdisciplinary Exploration," which focuses on the application of artificial intelligence to literature, cognitive science and mental and cultural diversity. For well over a year, they have worked on an experiment comparing humans' poetry generation to that of ChatGPT, with the continuous help of Contreras Kallens and Matthews.

Before the event began, attendees expressed their curiosity and concerns about novel AI technology.

Lauren Scheuer, a writing specialist at the Keuka College Writing Center and a Tompkins County local, described worries about the impact of ChatGPT on higher education.

"I'm concerned about how ChatGPT is being used to teach and to write and to generate content," Scheuer said.

Sarah Milliron grad, who is pursuing a Ph.D. in psychology, also said that she was concerned about ChatGPT's impact on academia as the technology becomes more widely used.

"I suppose I'm hoping [to gain] a bit of optimism [from this panel]," Milliron said. "I hope that they address ways that we can work together with AI, as opposed to [having] it be something that we ignore or are trying to get rid of."

Dubreuil first explained that there has been a recent surge of interest in artificial intelligence due to the impressive performance of ChatGPT and its successful marketing campaign.

"All scholars, but especially humanities, are currently wondering if we should take into account the new capabilities of automated text generators," Dubreuil said.

Dubreuil expressed that scholars have varying concerns and ideas regarding ChatGPT.

"Some [scholars] believe we should counteract [ChatGPT's consequences] by means of new policies," Dubreuil said. "Other [scholars] complained about the lack of morality or the lack of political apropos exhibited by ChatGPT. Still other [scholars] say that there is too much political apropos and political correctness."

Dubreuil noted that other scholars prophesy that AI could lead to the fall of humanity.

For example, historian Yuval Harari recently wrote about the 2022 Expert Survey on Progress in AI, which found that out of more than 700 surveyed top academics and researchers, half said that there was at least a 10 percent chance of human extinction or similarly permanent and severe disempowerment due to future AI systems.

Contreras Kallens then elaborated on their poetry experiment, which utilized what he referred to as "fragment completion": essentially, ChatGPT and Cornell undergraduates were both prompted to continue writing from two lines of poetry from an author such as Dickinson.

Contreras Kallens described that ChatGPT generally matched the poetry quality of a Cornell undergraduate, while expectedly falling short of the original authors' writing. However, the author recognition program they used actually confused the artificial productions with the original authors' work.

The final part of the project, which the group is currently refining, will measure whether students can tell whether a fragment was completed by the original author, an undergraduate or ChatGPT.

When describing the importance of this work, Contreras Kallens explained the concept of universal grammar, a linguistics theory that suggests that people are innately biologically programmed to learn grammar. Thus, ChatGPT's being able to reach the writing quality of many humans challenges assumptions about technology's shortcomings.

"[This model] invites a deeper reconsideration of language assumptions or language acquisition processing," Contreras Kallens said. "And that's at least interesting."

Matthews then expressed that his interest in AI does not lie in its generative abilities but in the possibility of representing text numerically and computationally.

"Often humanists are dealing with large volumes of text [and] they might be very different," Matthews said. "[It is] fundamental to the humanities that we debate [with each other] about what texts mean, how they relate to one another; we're always putting different things into relation with one another. And it would be nice sometimes to have a computational or at least quantitative basis that we could maybe talk about, or debate or at least have access to."

Matthews described that autoregressive language models, machine learning models that predict the next word in a text from the words that came before it, reveal the perceived similarity between certain words.

Through assessing word similarity, Matthews found that ChatGPT contains gendered language bias, which he said reflects the bias in human communication.

For example, Matthews inputted the names Mary and James (the most common female and male names in the United States) along with Sam, which was used as a gender-neutral name. He found that James is closer to the occupations of lawyer, programmer and doctor than the other names, particularly Mary.
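The comparison Matthews described boils down to cosine similarity between word embeddings. Below is a minimal sketch of that measurement; the three-dimensional vectors are made-up numbers for illustration only, not actual GPT embeddings, and a real experiment would obtain vectors from an embedding model.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" invented for illustration; real ones have hundreds of dimensions.
vectors = {
    "James":  [0.9, 0.2, 0.1],
    "Mary":   [0.1, 0.9, 0.2],
    "Sam":    [0.5, 0.5, 0.2],
    "lawyer": [0.8, 0.3, 0.4],
}

# Rank the names by how close each sits to an occupation word.
for name in ("James", "Mary", "Sam"):
    print(name, round(cosine_similarity(vectors[name], vectors["lawyer"]), 3))
```

With real embeddings, one name sitting measurably closer to "lawyer" or "programmer" than the others is exactly the kind of learned association that registers as bias.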

Matthews explained that these biases were more prevalent in previous language modeling systems, but that the makers of GPT-3.5 (the embedding model of ChatGPT, as opposed to GPT-3, which is the model currently available to the public) have acknowledged bias in their systems.

"It's not just that [these models] learn language; they're also exposed to biases that are present in text," Matthews said. "This can be visible in social contexts especially, and if we're deploying these models, this has consequences if they're used in decision making."

Matthews also demonstrated that encoding systems can textually analyze and compare literary works, such as those by Shakespeare and Dickinson, making them a valuable resource for humanists, especially regarding large texts.

"Humanists are already engaged in thinking about these types of questions [referring to the model's semantic and cultural analyses]," Matthews said. "But we might not have the capacity or the time to analyze the breadth of text that we want to, and we might not be able to assign or even to recall all the things that we're reading. So if we're using this in parallel with the existing skill sets that humanists have, I think that this is really valuable."

Christiansen, who is part of a new University-wide committee looking into the potential use of generative AI, then talked about the opportunities and challenges of the use of AI in education and teaching.

Christiansen described that one positive pedagogical use of ChatGPT is to have students ask the software specific questions and then for the students to criticize the answers. He also explained that ChatGPT may help with the planning process of writing, which he noted many students frequently discount.

"I think also, importantly, that [utilizing ChatGPT in writing exercises] can actually provide a bit of a level playing field for second language learners, of which we have many here at Cornell," Christiansen said.

Christiansen added that ChatGPT can act as a personal tutor, help students develop better audience sensitivity, work as a translator and provide summaries.

However, these models also have several limitations. For instance, ChatGPT knows very little about any events that occurred after September 2021 and will be clueless about recent issues, such as the Ukraine war.

Furthermore, Christiansen emphasized that these models can and will "hallucinate," that is, make up information, including falsifying references. He also noted that students could potentially use ChatGPT to violate academic integrity.

Overall, Dubreuil expressed concern for the impact of technologies such as ChatGPT on innovation. He explained that ChatGPT currently only reorganizes data, which falls short of true invention.

"There is a wide range between simply incremental inventions and rearrangements that are such that they not only rearrange the content, but they reconfigure the given and the way the given was produced: its meanings, its values and its consequences," Dubreuil said.

Dubreuil argued that if standards for human communication do not require invention, not only will AI produce work that is not truly creative, but humans may become less inventive as well.

"It has to be said that through social media, especially through our algorithmic life, these days, we may have prepared our own minds to become much more similar to a chatbot. We may be reprogramming ourselves constantly, and that's the danger," Dubreuil said. "The challenge of AI is a provocation toward reform."

Correction, March 27, 2:26 p.m.: A previous version of this article incorrectly stated the time frame about which ChatGPT is familiar and the current leaders of the Humanities Lab. In addition, minor clarification has been added to the description of Christiansen and Dubreuil's study on AI poetry generation. The Sun regrets these errors, and the article has been corrected.

Read this article:
ChatGPT in the Humanities Panel: Researchers Share Concerns, Prospects of Artificial Intelligence in Academia - Cornell University The Cornell Daily...

Read More..

The Future is Now: Exploring the Importance of Artificial Intelligence – The Geopolitics

Artificial intelligence (AI) is a rapidly growing field that has captured the attention of scientists, engineers, business leaders, and policymakers worldwide. It refers to the development of computer systems that can perform tasks that typically require human intelligence, such as learning, problem-solving, perception, and decision-making. AI has the potential to transform various industries and sectors, including healthcare, transportation, education, manufacturing, and finance, among others. In this article, we will explore the importance of artificial intelligence in the future and its potential benefits and challenges.

One of the most significant advantages of artificial intelligence is its ability to automate routine and repetitive tasks, allowing humans to focus on more complex and creative work. For instance, AI-powered robots and machines can perform tasks like assembling products, packaging goods, and transporting materials with greater speed, accuracy, and efficiency than humans. This can help businesses increase productivity, reduce costs, and improve the quality of their products and services.

Another important benefit of artificial intelligence is its ability to analyze and interpret vast amounts of data, enabling organizations to gain valuable insights into customer behavior, market trends, and business operations. By using advanced algorithms and machine learning techniques, AI systems can identify patterns, correlations, and anomalies in data that would be challenging for humans to detect. This can help businesses make data-driven decisions, optimize their processes, and improve their overall performance.

Moreover, artificial intelligence has the potential to revolutionize healthcare by improving the accuracy and efficiency of medical diagnoses, treatments, and research. For example, AI systems can analyze medical images, such as X-rays and CT scans, to detect signs of diseases like cancer, heart disease, and Alzheimer's with greater accuracy than human doctors. AI-powered chatbots and virtual assistants can also provide patients with personalized health advice, monitor their symptoms, and remind them to take their medication. Additionally, AI can help accelerate drug discovery and development by predicting the efficacy and safety of new drugs and identifying potential side effects.

In the field of education, artificial intelligence can help personalize learning and improve student outcomes by providing tailored instruction and feedback based on individual needs and preferences. For example, AI systems can analyze students' performance data and adjust their learning paths and content accordingly. AI-powered chatbots can also provide students with instant answers to their questions and feedback on their assignments. Moreover, AI can help educators develop more effective teaching strategies by providing insights into student engagement, motivation, and learning preferences.

However, along with its potential benefits, artificial intelligence also poses significant challenges and risks that need to be addressed. One of the main concerns is the potential impact of AI on employment, as automation and AI systems may replace human workers in various industries and occupations. While AI can create new job opportunities in areas like software development, data analysis, and robotics, it may also lead to job losses in other sectors, particularly those that involve routine and repetitive tasks.

Another challenge of artificial intelligence is its potential to perpetuate and amplify social biases and inequalities. AI systems are only as unbiased as the data they are trained on, and if the data contain biased or discriminatory patterns, the AI systems will replicate and reinforce them. This can lead to unfair and discriminatory outcomes in areas like hiring, lending, and law enforcement. Therefore, it is essential to ensure that AI systems are developed and deployed ethically and with diversity and inclusivity in mind.

Moreover, artificial intelligence also raises concerns about privacy, security, and accountability. AI systems often collect and process sensitive personal data, such as medical records, financial information, and social media activity, raising concerns about data breaches, identity theft, and surveillance. Additionally, AI systems may make decisions that have significant consequences for individuals and society, such as determining eligibility for loans or insurance, or recommending criminal sentences. Therefore, it is crucial to ensure that AI systems are transparent, accountable, and subject to ethical and legal oversight.

Artificial intelligence is a powerful and transformative technology that has the potential to bring significant benefits to various industries and sectors. By automating routine tasks, analyzing data, and improving decision-making, AI can help increase productivity, reduce costs, and improve the quality of products and services. In healthcare, education, and other fields, AI can improve outcomes and accelerate progress. However, AI also poses significant challenges and risks that need to be addressed, such as job displacement, bias and discrimination, privacy, security, and accountability. Therefore, it is essential to ensure that AI is developed and deployed ethically, transparently, and with diversity and inclusivity in mind. By doing so, we can harness the power of AI to create a more prosperous, equitable, and sustainable future for all.

[Gerd Altmann / Pixabay]

Carl Taylor is a tech author with over 12 years of experience in the industry. He has written numerous articles on topics such as artificial intelligence, machine learning, and data science. The views and opinions expressed in this article are those of the author.

See the rest here:
The Future is Now: Exploring the Importance of Artificial Intelligence - The Geopolitics


Most Jobs Soon To Be Influenced By Artificial Intelligence, Research Out Of OpenAI And University Of Pennsylvania Suggests – Forbes

As artificial intelligence opens up and becomes democratized through platforms offering generative AI, it's likely to alter tasks within at least 80% of all jobs, a new analysis suggests. Jobs requiring college education will see the highest impacts, and in many cases, at least half of people's tasks may be affected by AI. It's extremely important to add that affected occupations will be significantly influenced or augmented by generative AI, not replaced.

That's the word from a paper published by a team of researchers from OpenAI, OpenResearch, and the University of Pennsylvania. The researchers included Tyna Eloundou with OpenAI, Sam Manning with OpenResearch and OpenAI, Pamela Mishkin with OpenAI, and Daniel Rock, assistant professor at the University of Pennsylvania, also affiliated with OpenAI and OpenResearch.

The research looked at the potential implications of GPT (Generative Pre-trained Transformer) models and related technologies on occupations, assessing their exposure to GPT capabilities. "Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their tasks impacted," Eloundou and her colleagues estimate. The influence spans all wage levels, with higher-income jobs potentially facing greater exposure, particularly jobs requiring college degrees. At the same time, they observe, considering each job as a bundle of tasks, "it would be rare to find any occupation for which AI tools could do nearly all of the work."
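Headline figures of this kind come from treating each occupation as a bundle of tasks and then counting workers by the share of their tasks that are exposed. A small sketch of that aggregation follows; the occupations, worker counts, and exposure numbers are invented for illustration, whereas the actual study scores real occupational task data.

```python
# Hypothetical occupations: (name, workers, total tasks, tasks judged exposed to GPTs).
occupations = [
    ("writer",     100, 10, 7),
    ("programmer", 200, 10, 6),
    ("welder",     300, 10, 1),
    ("surveyor",   400, 10, 0),
]

total_workers = sum(w for _, w, _, _ in occupations)

def share_with_exposure(threshold):
    """Fraction of all workers whose occupation has at least `threshold` of its tasks exposed."""
    affected = sum(w for _, w, tasks, exposed in occupations
                   if exposed / tasks >= threshold)
    return affected / total_workers

print(f"{share_with_exposure(0.10):.0%} of workers have at least 10% of tasks exposed")
print(f"{share_with_exposure(0.50):.0%} of workers have at least 50% of tasks exposed")
```

The point of the two thresholds is visible even in this toy data: many more workers clear the 10% bar than the 50% bar, which is why "affected" is a much broader claim than "largely automatable."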

The researchers base their study on GPT-4, and use the terms large language models (LLMs) and GPTs interchangeably.

Their findings suggest that programming and writing skills are more likely to be influenced by generative AI. On the other hand, occupations or tasks involving science and critical thinking skills are less likely to be influenced. Occupations that are seeing or will see a high degree of AI-based influence and augmentation (again, emphasis on influence and augment) include the following:

"GPTs are improving in capabilities over time with the ability to complete or be helpful for an increasingly complex set of tasks and use-cases," Eloundou and her co-authors point out. They caution, however, that the definition of a task is very fluid. "It is unclear to what extent occupations can be entirely broken down into tasks, and whether this approach systematically omits certain categories of skills or tasks that are tacitly required for competent performance of a job," they add. "Additionally, tasks can be composed of sub-tasks, some of which are more automatable than others."

There are more implications to AI than simply taking over tasks, of course. "While the technical capacity for GPTs to make human labor more efficient appears evident, it is important to recognize that social, economic, regulatory, and other factors will influence actual labor productivity outcomes," the team states. There will be broader implications for AI as it progresses, including its potential to augment or displace human labor, its impact on job quality, impacts on inequality, skill development, and numerous other outcomes.

Still, "accurately predicting future LLM applications remains a significant challenge, even for experts," Eloundou and her co-authors caution. "The discovery of new emergent capabilities, changes in human perception biases, and shifts in technological development can all affect the accuracy and reliability of predictions regarding the potential impact of GPTs on worker tasks and the development of GPT-powered software."

An important takeaway from this study is that generative AI (not to mention AI in all forms) is reshaping the workplace in ways that currently cannot be imagined. Yes, some occupations may eventually disappear, but those that can harness the productivity and power of AI to create new innovations and services that improve the lives of customers or people will be well-placed for the economy of the mid-to-late 2020s and beyond.

I am an author, independent researcher and speaker exploring innovation, information technology trends and markets. I served as co-chair of the AI Summit in 2021 and 2022, and have also participated in the IEEE International Conference on Edge Computing and the International SOA and Cloud Symposium series. I am also a co-author of the SOA Manifesto, which outlines the values and guiding principles of service orientation in business and IT. I also regularly contribute to Harvard Business Review and CNET on topics shaping business and technology careers.

Much of my research work is in conjunction with Forbes Insights and Unisphere Research/ Information Today, Inc., covering topics such as artificial intelligence, cloud computing, digital transformation, and big data analytics.

In a previous life, I served as communications and research manager of the Administrative Management Society (AMS), an international professional association dedicated to advancing knowledge within the IT and business management fields. I am a graduate of Temple University.

Read more:
Most Jobs Soon To Be Influenced By Artificial Intelligence, Research Out Of OpenAI And University Of Pennsylvania Suggests - Forbes


How will artificial intelligence affect rural communities? – Alton Telegraph

The hot topic of the day seems to be Artificial Intelligence, or AI for short; more specifically, ChatGPT. When it comes to this topic, I am reminded of a quote attributed to Thomas Jefferson: "I'm a great believer in luck, and I find the harder I work the more I have of it." When it comes to AI, one can't help but ask the question: how will AI impact smaller or rural communities in the future?

Regardless of where you stand on this, make no mistake, it will take hard work and not luck to maximize this tool. As for luck, Guy Tasaka, a good friend of mine and co-founder of MediaFlowAI, recently said when discussing AI, "You can be on the train, or under the train."

The impact of AI on smaller communities will depend on various factors such as the availability of resources, infrastructure, and the community's readiness to adopt new technologies. However, there are several potential positive and negative impacts that AI could have on smaller communities. Here are just a few.

On the positive side, AI will allow for much improved access to healthcare: it can help diagnose diseases more accurately and quickly, especially in areas where there is a shortage of healthcare professionals. AI will lead to increased efficiency in agriculture, helping farmers optimize crop yields, reduce waste, and increase profits. AI can lead to much better education by creating personalized learning experiences for students, regardless of their location or socioeconomic status. And AI can drive job creation, potentially opening new opportunities in fields such as data analysis, software development, and robotics.

On the negative side of the equation, like many things, AI can be a two-edged sword. While it can create jobs as we mentioned above, it can also automate certain types of work, leading to job loss in certain industries. AI can widen inequality, because smaller communities may not have access to the same resources and infrastructure needed to fully adopt it, potentially leading to a wider digital divide.

AI can intrude on one's privacy: it requires vast amounts of data to operate effectively, and this data could potentially be used for purposes that do not align with community values. AI can also create a dependence on technology; over-reliance on AI can potentially lead to a loss of critical thinking skills and decision-making abilities.

Overall, as one might expect, the impact of AI on smaller communities will depend on how it is understood, implemented, and integrated into the community. It is important that communities carefully consider the potential benefits and risks before they select which aspects of AI to adopt or embrace. This allows them to adopt AI where it makes sense, thus ensuring the potential benefits outweigh the potential risks.

Lastly, understand that fear of something tends to paralyze. We can't afford to fear AI; there are far too many positive aspects to it. It is important that instead of ignoring or fearing AI, we strive to understand the many values and positive uses that can be derived from it. Ignoring or fearing AI simply places us under the train. Learning to understand, utilize and even embrace it assures we have the right tools to succeed.

John Newby, from SW Missouri, is a nationally recognized columnist, speaker and publisher. He consults with community, business and media organizations. His "Building Main Street, not Wall Street" column is read by 60+ communities around the country. As founder of Truly-Local, he assists communities, media and business leaders in building synergies that create vibrant communities. He can be reached at: John@Truly-Local.org.

See the rest here:
How will artificial intelligence affect rural communities? - Alton Telegraph
