
From Troops to Tech Leaders: National University Paves the Way for Service Members to Transition into AI and Data … – PR Web

With prestigious National Science Foundation grant, Veteran-founded National University announces new bachelor's degree helping active-duty military and Veterans up- and reskill for careers in data science and AI

SAN DIEGO, Dec. 21, 2023 /PRNewswire-PRWeb/ -- National University, a nonprofit and Veteran-founded Minority Serving Institution that serves 50,000 degree-seeking students and 80,000 workforce and professional development students annually, today announced the development of a new B.S. Data Science degree aimed at helping military service members and Veterans transition into the in-demand fields of data science and artificial intelligence (AI) technology. This effort is made possible through a $500,000 grant from the National Science Foundation (NSF) and will play a vital role in addressing the growing demand for professionals in these rapidly evolving sectors.

"Even as we grapple with understanding the vast and still emerging capabilities of artificial intelligence, we face an equally important obligation to ensure that this new AI and technology-driven economy is accessible for every learner and worker, including those who have served our nation in uniform," said Dr. Jodi Reeves, National University department chair of data science and the associate director for education, diversity, and outreach for TILOS. "The U.S. armed services have long been at the forefront of artificial intelligence and data innovation, so we have a profound opportunity to align the talents of dedicated service members in transition and Veterans to meet the needs of a fast-changingand increasingly tech-driveneconomy where these same technologies will play an outsized role."

Every year, approximately 500,000 members of the U.S. military depart active duty and begin the transition to the civilian workforce. The initiative comes in response to the escalating demand for skilled individuals in data science and AI technology, with data science being the most in-demand job in the industry. As these fields continue to shape industries across the globe, the demand for experts in this domain is at an all-time high. According to the Bureau of Labor Statistics, data scientist roles are projected to grow 35% between 2022 and 2032, much faster than the average for all occupations.

In 2021, National University was selected to be part of a prestigious team awarded a $20 million grant from the National Science Foundation (NSF) for its contribution to The Institute for Learning-enabled Optimization at Scale (TILOS), an AI institute led by the University of California, San Diego, alongside partners such as Yale University, MIT, the University of Pennsylvania, and the University of Texas at Austin. Through the grant, National University will receive $100,000 annually for five years to support faculty members developing new curriculum, in order to help the university serve its diverse student demographics, including working adults and members of the military community.

The AI reskilling program is one of several programs and initiatives created by National University in response to the growing demand for skilled workers who are passionate about the field of data science and AI technology. Since the AI institute's inception, the AI specialization of the M.S. in Data Science has grown to be the largest specialization at National University, with 56% of the students being military-affiliated. The new B.S. in Data Science degree will include concentrations in AI and machine learning, cybersecurity analytics, and bioinformatics. New courses begin in February 2024.

"To make good on our commitment to serving military and Veteran students to the best of our ability, we need to continuously find ways to bridge the divide between civilian systems of education and employment and our modern military we must evolve the programs we offer, and the ways in which we serve this unique population of learners," said Meg O'Grady, Senior Vice President, Military and Government Programs at National University. "This is about creating inclusive pathways to careers in data, technology and artificial intelligence for Veterans and service members in transitionand finding new ways to align military skills and credentials with the needs of the emerging AI economy."

Throughout its 50-year history, National University has established a strong reputation for its focus on serving military-connected students and Veterans. Its student population reflects the shifting and highly diverse demographics of higher education today. Approximately 70 percent of its students take the majority of their classes online. More than 25 percent identify as Hispanic, and 10 percent identify as Black. More than 80 percent of undergraduates are transfer students. The average age of its students is 33. And about 1 in 4 students are active-duty service members or Veterans.

Established in 1971 by retired U.S. Navy Capt. David Chigos, the university also has a rich history of commitment to military personnel. Recognized as a 2022-2023 Military Friendly School and a participant in the Yellow Ribbon Program, National University proudly offers over 190 degree programs, tailored to meet the needs of active-duty military members, Veterans, and their dependents.

National University's four-week course structure is designed to accommodate the unique demands of military life, enabling students to pursue their degrees without disrupting training, service and deployment schedules. In addition, the university's Veteran Center assists in the transition from military to civilian life, offering guidance and support services. National University also utilizes transfer-friendly policies that enable students to leverage previously earned college credits, professional certifications, and military training.

To learn more about National University's offerings, visit our website at NU.edu.

About National University

National University, a Veteran-founded nonprofit, has been dedicated to meeting the needs of hard-working adults by providing accessible, affordable higher education opportunities since 1971. As San Diego's largest private nonprofit university, NU offers more than 190 online and on-campus programs with flexible four-week and eight-week classes and one-to-one graduate education models designed to help students reach their goals while balancing busy lives. Since its founding, the NU community has grown to 130,000 learners served per year (50,000 degree-seeking students and 80,000 workforce and professional development students) and 230,000 alumni around the globe, many of whom serve in helping industries such as business, education, health care, cybersecurity, and law and criminal justice. To learn more about National University's new possibilities in education, including next-generation education, credential-rich education, and whole human education, visit NU.edu.

Media Contact

Ashleigh Webb, National University, 760-889-3494, [emailprotected], https://www.nu.edu/


SOURCE National University

Read the original:

From Troops to Tech Leaders: National University Paves the Way for Service Members to Transition into AI and Data ... - PR Web


React Props Explained With Examples – Built In

React has a different approach to data flow and manipulation than other frameworks, and that's why it can be difficult in the beginning to understand concepts like props, state and others.

Props is a special keyword in React that stands for properties and is used for passing data from one component to another. Data with props are passed in a unidirectional flow from parent to child.

We're going to focus on React's props feature and how to use it.

To understand how props work, first, you need to have a general understanding of the concept of React components. We'll cover that and more in this article.

React is a component-based library that divides the UI into little reusable pieces. In some cases, those components need to communicate or send data to each other, and the way to pass data between components is by using props.

As I shared, props is a keyword in React that passes data from one component to another. But the important part here is that data with props are passed in a unidirectional flow. This means it's passed one way from parent to child.

Props data is read-only, which means that data coming from the parent shouldn't be changed by child components.

Now, let's see how to use props with an example.


More on React: A Guide to React Hooks With Examples

I will explain how to use props step by step. There are three steps to using React props: define an attribute and its data, pass it to the child component, and render the props data.

In this example, we have a ParentComponent including another ChildComponent:
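A minimal sketch of what that parent might look like (file paths and the number of child calls are illustrative):

```jsx
import React from "react";
import ChildComponent from "./ChildComponent";

function ParentComponent() {
  return (
    <div>
      <ChildComponent />
      <ChildComponent />
      <ChildComponent />
    </div>
  );
}

export default ParentComponent;
```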

And this is our ChildComponent:
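A sketch of the child, which for now returns a hard-coded string:

```jsx
import React from "react";

function ChildComponent() {
  return <p>I'm the 1st child!</p>;
}

export default ChildComponent;
```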

The problem here is that when we call the ChildComponent multiple times, it renders the same string again and again:
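A sketch of that, assuming a root element with the id "root" (output shown as comments):

```jsx
import { createRoot } from "react-dom/client";
import ParentComponent from "./ParentComponent";

createRoot(document.getElementById("root")).render(<ParentComponent />);
// Output on the page:
// I'm the 1st child!
// I'm the 1st child!
// I'm the 1st child!
```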

But what we'd like to do here is get dynamic outputs, because each child component may have different data. Let's see how we can solve this issue by using props.

We already know that we can assign attributes and values to HTML tags:
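For example, an ordinary tag with an attribute and a value (the link here is just a placeholder):

```html
<a href="https://example.com">Example link</a>
```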

Likewise, we can do the same for React components. We can define our own attributes and assign values with interpolation { }:
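A sketch of doing that with a custom text attribute (the string follows the article's example; imports are omitted):

```jsx
// Inside ParentComponent
function ParentComponent() {
  return (
    <div>
      <ChildComponent text={"I'm the 1st child"} />
    </div>
  );
}
```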

Here, I'm declaring a text attribute on the ChildComponent and then assigning it a string value: "I'm the 1st child."

Now, the ChildComponent has a property and a value. Next, we need to pass it via props.

Let's take the "I'm the 1st child!" string and pass it by using props.

Passing props is very simple. Just as we pass arguments to a function, we pass props into a React component, and props bring all the necessary data. Arguments passed to a function:
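A quick sketch (the function name getText is illustrative):

```js
const getText = (text) => {
  return text;
};

getText("I'm the 1st child"); // returns "I'm the 1st child"
```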

Arguments passed to a React component:
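One way the React side might be sketched: the component receives a single props object as its function argument (the child still returns a hard-coded string at this step):

```jsx
function ChildComponent(props) {
  return <p>I'm the 1st child!</p>;
}
```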

We've created an attribute and its value, then we passed it through props, but we still can't see it because we haven't rendered it yet.

Props is an object. In the final step, we will render the props object by using string interpolation: {props}.

But first, log props to console and see what it shows:
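A sketch of that, with the logged value shown as a comment (the exact shape depends on the attributes passed in):

```jsx
function ChildComponent(props) {
  console.log(props);
  // Logs something like: { text: "I'm the 1st child" }
  return <p>I'm the 1st child!</p>;
}
```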

As you can see, props returns an object. In JavaScript, we can access object elements with dot notation. So, let's render our text property with an interpolation:
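A minimal sketch of the child rendering the prop with dot notation:

```jsx
function ChildComponent(props) {
  return <p>{props.text}</p>;
}
```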

And that's it. We've rendered the data coming from the parent component. Before closing, let's do the same for the other child components:
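A sketch of the parent passing a different string to each child, with the resulting output shown as comments (imports omitted):

```jsx
// Inside ParentComponent
function ParentComponent() {
  return (
    <div>
      <ChildComponent text={"I'm the 1st child"} />
      <ChildComponent text={"I'm the 2nd child"} />
      <ChildComponent text={"I'm the 3rd child"} />
    </div>
  );
}

// Rendered output:
// I'm the 1st child
// I'm the 2nd child
// I'm the 3rd child
```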

As we can see, each ChildComponent renders its own prop data. This is how you can use props for passing data and converting static components into dynamic ones.

More on React: How to Make API Calls in React With Examples

To recap: we defined an attribute and its value on a component, passed that data from parent to child through props, and rendered the props data in the child component.

Understanding React's approach to data manipulation takes time. I hope my post helps you to become better at React.

View post:

React Props Explained With Examples - Built In


Introducing Microsoft Fabric: Will Power BI Be Replaced in 2024? – DataDrivenInvestor

Hey there, I'm excited to share my personal journey with a revolutionary analytics platform I recently discovered, Microsoft Fabric.

Microsoft Fabric represents a paradigm shift in the way Power BI users interact with and visualize data. By leveraging advanced technologies and cutting-edge design principles, Microsoft Fabric introduces a new era of intuitive and immersive data experiences.

Microsoft Fabric is an end-to-end analytics solution with full-service capabilities including data movement, data lakes, data engineering, data integration, data science, real-time analytics, and business intelligence all backed by a shared platform providing robust data security, governance, and compliance.

Your organization no longer needs to stitch together individual analytics services from multiple vendors. Instead, use a streamlined solution that's easy to connect, onboard, and operate.

When I first learned about Microsoft Fabric, I was impressed by its promise to reshape how everyone accesses, manages, and acts on data and insights. In the past, it was a challenge to connect every data source and analytics service together; now, it's all possible on a single, AI-powered platform.

For instance, imagine having a data estate sprawled across different sources and platforms. As a data engineer, it's a daunting task to connect and curate data from these different sources.

One of the biggest hurdles in data analysis is managing AI models. But with Microsoft

The rest is here:

Introducing Microsoft Fabric: Will Power BI Be Replaced in 2024? - DataDrivenInvestor


The Future of Data Engineering in an AI-Driven Landscape – CXOToday.com

By Jeff Hollan

Jeff Hollan, Director of Product Management, Snowflake, highlights the anticipated developments in 2024 as artificial intelligence becomes integrated into business operations.

Data engineering will evolve and be highly valued in an AI world.

There's been a lot of chatter that the AI revolution will replace the role of data engineers. That's not the case; in fact, their data expertise will be more critical than ever, just in new and different ways. To keep up with the evolving landscape, data engineers will need to understand how generative AI adds value. The data pipelines built and managed by data engineers will be perhaps the first place to connect with large language models for organizations to unlock value. Data engineers will be the ones who understand how to consume a model and plug it into a data pipeline to automate the extraction of value. They will also be expected to oversee and understand the AI work.
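As a rough illustration of the kind of pipeline step described here (not Snowflake-specific; the model name, ticket shape, and category labels are all assumptions for the sketch), a data engineer might wire a language model call into an enrichment stage like this:

```js
// Sketch of an LLM-backed enrichment step in a data pipeline.
// Model name, ticket shape, and labels are illustrative.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function enrichTickets(tickets) {
  const enriched = [];
  for (const ticket of tickets) {
    const response = await client.chat.completions.create({
      model: "gpt-4o-mini", // any chat-capable model would do here
      messages: [
        { role: "system", content: "Classify the support ticket as billing, bug, or other. Reply with one word." },
        { role: "user", content: ticket.text },
      ],
    });
    // Attach the model's label so downstream steps can aggregate on it.
    enriched.push({ ...ticket, category: response.choices[0].message.content.trim() });
  }
  return enriched;
}
```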

Data scientists will have more fun.

Just as cloud infrastructure forced IT organizations to learn new skill sets by moving from builders of infrastructure and software, to managers of third-party infrastructure and software vendors, data science leaders will have to learn to work with external vendors. It will be an increasingly important skill to be able to pick the right vendors of AI models to engage with, similar to how data scientists today choose which frameworks to use for specific use cases. The data scientist of tomorrow might be responsible for identifying the right vendors of AI models to engage with, determining how to feed the right context into a large language model (LLM), minimizing hallucinations, or prompting LLMs to answer questions correctly through context and formalizing metadata. These are all new and exciting challenges that will keep data scientists engaged and hopefully inspire the next generation to get into the profession.

BI analysts will have to uplevel.

Today, business intelligence analysts generally create and present canned reports. When executives have follow-up questions, the analysts then have to run a new query to generate a supplemental report. In the coming year, executives will expect to interact directly with data summarized in that overview report using natural language. This self-service will free up analysts to work on deeper questions, bringing their own expertise to what the organization really should be analyzing, and ultimately upleveling their role to solve some of the challenges AI cant.

(The author is Jeff Hollan, Director of Product Management, Snowflake, and the views expressed in this article are his own.)

Follow this link:

The Future of Data Engineering in an AI-Driven Landscape - CXOToday.com


Infusion of generative AI into analytics a work in progress – TechTarget

The integration between generative AI and analytics remains under development.

Many vendors have unveiled plans to enable customers to query and analyze data using conversational language rather than code or business-specific natural language processing (NLP) capabilities.

In addition, many have also introduced AI assistants that users can ask for help while executing various tasks, and tools that automatically summarize and explain data products such as reports, dashboards and models.

Some have also introduced SQL generation features that reduce the coding requirements needed to model data and automated tools that offer suggestions as developers build data products.

Sisense was perhaps the first analytics vendor to reveal plans to integrate its platform with generative AI (GenAI) capabilities, introducing an integration with OpenAI -- developer of ChatGPT -- in January 2023. Two months later, ThoughtSpot unveiled Sage, a tool that combined the vendor's existing natural language search capabilities with large language model (LLM) capabilities to enable conversational language interactions with data.

By summer, Tableau and Qlik were among the vendors that had introduced generative AI plans. In addition, tech giants AWS, Google and Microsoft -- developers of analytics platforms QuickSight, Looker and Power BI, respectively -- were all working to add generative AI to their BI tools.

But as the end of 2023 nears, most analytics vendors' generative AI capabilities are still in some stage of development and have not yet been made generally available.

There are exceptions.

For example, MicroStrategy in October made NLP and text-to-code translation capabilities generally available. Similarly, Domo in August released NLP and AI model management capabilities as part of its Domo AI suite.

Most others, however, are still being refined, according to David Menninger, an analyst at Ventana Research.

"There are some that are available today, but the majority are still in preview," he said.

Perhaps the main reason for the holdup is that it's difficult to take a new technology and make it one of the most significant parts of a platform.

It takes time to get it right, and vendors are attempting to get it right before they release tools to the public, according to Sumeet Arora, ThoughtSpot's chief development officer. Before releasing tools, vendors need to make sure responses to natural language queries are accurate and the data organizations load into analytics tools that are integrating with LLMs remains private and secure.

"The most difficult technical problem is how to leverage GenAI to answer natural language questions with 100% accuracy in the enterprise. That is not a straightforward problem," Arora said.

He noted that OpenAI's GPT-4 and Google's Gemini answer questions with just under 80% accuracy.

"That is not good enough in analytics," Arora said. "The question is how to get to 100% accuracy in analytics. That has been the journey of the last year."

There are two big reasons so many vendors have made generative AI a focal point of product development.

One is the potential to expand BI use within organizations beyond a small group of highly trained users. The other is the possibility of making existing data experts more efficient. Both come down to the simplification of previously complex processes, according to Francois Ajenstat, now the chief product officer at digital analytics platform vendor Amplitude after 13 years at Tableau.

"GenAI really drives radical simplicity in the user experience," he said. "It will open up analytics to a wider range of people and it will enable better analysis by doing a lot of the hard work on their behalf so they can get to the insights faster and focus on what matters."

Analytics use has been stuck for more than a decade, according to studies.

Because BI platforms are complicated, requiring coding skills for many tasks and necessitating data literacy training even for low-code/no-code tools aimed at self-service users, only about a quarter of employees within organizations use analytics as a regular part of their job.

Generative AI can change that by enabling true conversational interactions with data.

Many vendors developed their own NLP capabilities in recent years. But those NLP tools had limited vocabularies. They required highly specific business phrasing to understand queries and generate relevant responses.

The generative AI platforms developed by OpenAI, Google, Hugging Face and others have LLMs trained with extensive vocabularies and eliminated at least some of the data literacy training required by previous NLP tools.

Meanwhile, by reducing the need to code, generative AI tools can make trained data workers more efficient.

Developing data pipelines that feed data products takes copious amounts of time-consuming coding. Text-to-code translation capabilities enable data workers to use conversational language to write commands that get translated to code. They greatly reduce time-consuming tasks and free data workers to do other things.

MicroStrategy and AWS are among the vendors that have introduced generative AI tools that enable natural language query and analysis. They also released capabilities that automatically summarize and explain data.

In addition, Qlik with Staige and Domo with Domo AI -- along with a spate of data management vendors -- are among those that have gone further and introduced text-to-code translation capabilities.

Ultimately, one of the key reasons analytics vendors are developing so many generative AI tools is the promise of improved communication, according to Donald Farmer, founder and principal of TreeHive Strategy.

"The most obvious [benefit of generative AI] is that it has an ability to communicate what it finds," Farmer said. "Ninety percent of the problem of analytics is explaining your answers to people or getting people to understand what has been discovered."

Two more benefits -- perhaps geared more toward developers and data engineers -- are its intuitiveness related to data integration and its ability to generate test code to help develop algorithms, Farmer added.

With respect to the generative AI capabilities themselves, some vendors are experimenting a bit more radically than others.

NLP, AI assistants and text-to-code translation capabilities are the most common features that vendors have introduced to simplify data preparation and analysis. Others, however, represent the cutting edge.

Menninger cited Tableau Pulse, a tool that learns Tableau users' behavior to automatically surface relevant insights, as something beyond what most vendors so far have publicly revealed. In addition, he noted that some vendors are working on features that automate tasks such as metadata creation and data cataloging that otherwise take significant time and manual effort.

"The cutting edge is metadata awareness and creation, building a semantic model with little or no intervention by the user," Menninger said. "Take away NLP and the other great value of GenAI is automation. It can automate various steps that have been obstacles to analytics success in the past."

Farmer, meanwhile, named Microsoft Copilot, Amazon Q from AWS and Duet AI from Google as comprising the current cutting edge.

Unlike analytics specialists whose tools deal only with analyzing data, the tech giants have the advantage of managing an entire data ecosystem and thus gaining deep understanding of a particular business. Their generative AI tools, therefore, are being integrated not only with BI tools but also data management, supply chain management, customer service and other tools.

"The stuff that looks most interesting is the copilot stuff -- the idea of AI as something that is there in everything you do and absolutely pervasive," Farmer said. "It's only the big platform people that can do that."

Often, after tools are unveiled in preview, it takes only a few months for vendors to make whatever alterations are needed and release the tools to the public.

That hasn't been the case with generative AI capabilities.

For example, Sage was introduced by ThoughtSpot nine months ago and is not yet generally available. It was in private preview for a couple of months and then moved to public preview. But the vendor is still working to make sure it's enterprise-ready.

Similarly, Tableau unveiled Tableau GPT and Tableau Pulse in May, but both are still in preview. The same is true of Microsoft's Copilot in Power BI, Google's Duet AI in Looker and Spotfire's Copilot, among many other generative AI tools.

At the core of ensuring generative AI tools are enterprise ready are accuracy, data privacy and data security, as noted by Arora.

AI hallucinations -- incorrect responses -- have been an ongoing problem for LLMs. In addition, their security is suspect, and they have been susceptible to data breaches.

Reducing incorrect responses takes training, which is what vendors are now doing, according to Arora.

He noted that by working with customers using Sage in preview, ThoughtSpot has been able to improve Sage's accuracy to over 95% by combining generative AI with human training.

"What we have figured out is that it takes a human-plus-AI approach to get to accuracy," Arora said. "ThoughtSpot is automatically learning the business language of the organization. But we have made sure that there is human input into how the business language is being interpreted by the data."

A formula that seems to result in the highest level of accuracy from Sage -- up to 98%, according to Arora -- is to first roll the tool out to power users within an organization for a few weeks. Those power users are able to train Sage to some degree so the tool begins to understand the business.

Then, after those few weeks, Sage's use can be expanded to more users.


"There is no easy button for GenAI," Arora said. "But there is a flow that if you follow, you can get amazing results. Once the system is properly used by the power users and data analysts, it becomes ready for prime time."

But there's more to the lag time between introducing generative AI analytics capabilities and their general availability than just concerns and risks related to accuracy, security and privacy, according to Ajenstat.

Generative AI represents a complete shift for analytics vendors.

It has been less than 13 months since OpenAI released ChatGPT, a significant improvement in generative AI and LLM capabilities.

Before then, some analytics vendors offered traditional AI and machine learning capabilities but not generative AI. Once ChatGPT was released, the ways it could make data workers more efficient while enabling more people within organizations to use analytics tools were clear.

But truly integrating the technologies -- ensuring accurate natural language query responses, training chatbots to be able to assist as customers use various tools, and putting governance measures in place to guarantee data privacy and security -- takes time.

"The speed of adoption of ChatGPT took the technology industry by storm," Ajenstat said. "We realized this is a tectonic plate shifting in the landscape where everyone will lean into it. As a result of that, we're at the peak of the hype cycle. That's exciting. It also means there are a lot of ideas."

Getting the ideas from the planning stage to the production stage, however, is not a simple, quick process, he continued.

In particular, analytics vendors need to make sure users can trust their generative AI tools.

"We see the potential, but can users trust the results?" Ajenstat said. "There's also trust in terms of the training data -- sending it to [a third party]. There's trust in terms of bias and ethics. There's a lot below the surface that technology providers and the industry as a whole have to figure out to make sure we're delivering great products that actually solve customers' problems."

Beyond the difficulty related to getting tools ready for enterprise-level consumption, the speed of generative AI innovation is delaying the release of some capabilities, according to Farmer.

He noted that once a feature is made generally available, a vendor is making a commitment to that feature. They're committing to improve it with updates, offer support to users and so forth. But because generative AI is now evolving so quickly, vendors are unsure whether some of the capabilities they've revealed in preview are the ones they want to commit to long term.

"If you come out with an enterprise product, you're committed to supporting it over the lifetime of an enterprise product," Farmer said. "It's really difficult to say something is stable enough and complete enough and supportable enough to build an enterprise agreement around it."

The natural language query capabilities, AI assistants and summarization tools that many analytics vendors are developing -- and a few have made generally available -- are the first generation of generative AI capabilities.

At a certain point, perhaps during the first half of 2024, most will be ready for widespread use. At that point, vendors will turn their attention to a different set of generative AI features, which will represent a second generation.

Process automation might be a primary theme of that second generation after simplification was the primary theme of the first generation, according to Ajenstat.

"It will be interesting to see how long we stay in the first generation because we haven't actually gotten the mass adoption there," he said. "But for me, the next phase is going to be about automation. It will be using generative AI as an agent that can automate tasks on your behalf, augmenting the human by removing some of the drudgery that's out there."

Arora likewise predicted that task automation will mark the next generation of generative AI, eventually followed by systems themselves becoming autonomous.

Many of the tasks needed to inform natural language queries such as data curation and defining metrics still require manual effort, he noted. But automation of those data preparation tasks is coming.

As for when those automation capabilities will be in production, Arora predicted it could be as early as the second half of 2024 and early 2025.

"Users will be able to connect to a data system and then start asking questions," Arora said. "The system will automatically define metrics, automatically find the right data tables and answer questions with accuracy. There will be complete automation."

Similarly, Menninger cited automation as a likely centerpiece of the next phase of generative AI development. However, he said automation will go beyond merely reducing the drudgery of data preparation.

Menninger expects that the first-generation tools now under development will be available by the end of the first half of next year. Vendors will then turn their attention not simply to automation but to the automation of AI.

Generative AI is not the same as AI, Menninger noted. It is not predictive analytics. It still takes the rare expertise of data scientists and PhDs to train data and develop AI models that can do predictive analysis.

However, generative AI can eventually be trained to create AI models.

"In the same way GenAI can now generate SQL code, we're going to get to a point where GenAI can generate predictive models that are of a high enough quality that we can rely on them," Menninger said. "And if they're not of a high enough quality, they'll be close enough that we can at least have more productivity around creating AI models."

He added that research from ISG -- now Ventana's parent company -- shows that the number one problem organizations have developing AI models is a lack of skills.

"To the extent we can use GenAI to overcome that skills gap, that's what the future is about."

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.

Read the rest here:

Infusion of generative AI into analytics a work in progress - TechTarget


Computer Science Faculty job with KIMEP University | 37576602 – The Chronicle of Higher Education

KIMEP University

Computer Science Faculty Position Description

KIMEP University invites applications for faculty positions (Assistant/Associate/Full Professor) in Computer Science. The bachelor's program in Computer Science is a newly created program that will admit its first students in the 2024-2025 academic year. KIMEP University is the most prestigious and dynamic university in Kazakhstan and Central Asia and has a growing student body. Faculty are expected to teach various courses in the Computer Science program as well as some elective courses for the Business Information Systems program. The appointment will start in Fall 2024 (August 2024). Responsibilities involve teaching, research, and service.

Qualifications

Applicants must have earned a PhD in Computer Science or Software Engineering from an accredited institution and/or be working as faculty at such accredited schools, and must have demonstrated consistent, good-quality scholarship published in SCI/SSCI and Scopus Q1-Q2 ranked journals. Candidates will be expected to publish in such journals. The teaching load for this position will be 3 courses (9 hours) per academic semester.

KIMEP University

KIMEP University is the leading American-style, internationally accredited, English-language academic institution. The university provides a world-class academic experience and a unique international environment to all its students and faculty. KIMEP was established in 1992 and has built a very strong regional reputation as a leading university in higher education. All academic programs are ranked among the top in Kazakhstan.

Almaty, Kazakhstan

The city of Almaty is a beautiful, modern and vibrant city situated at the base of the majestic Tien Shan Mountains in Southeast Kazakhstan. The city has a population of 2 million people and is the financial, cultural, summer and winter sports and cosmopolitan capital of Kazakhstan. Kazakhstan is located in the heart of Eurasia with important commercial inroads bridging Asia and Europe. Kazakhstan's dynamically changing economic, social, educational and cultural environment provides incredible opportunities for significant and original research.

Compensation

Rank and salary are competitive and commensurate with experience and qualifications. Compensation after tax compares favorably with net salaries in Western countries. Combined with a low cost of living, the salary becomes even more competitive in real terms.

Limited on-campus housing is available to rent. In addition to salary, the benefits package includes basic healthcare, reduced tuition rates for KIMEP courses, and a relocation subsidy. Paid summer teaching is typically available. The salary will be subject to a deduction of 10% income tax.

Application Process

Please submit the following documents to KIMEP University HR portal: https://hr.kimep.kz/en-US/Home/Vacancy/567

Address any questions to: recruitment@kimep.kz

Closing date for submission of applications: January 31, 2024

Applications will be evaluated on an ongoing basis, and evaluation will continue until the position is filled. Only shortlisted candidates will be informed and invited for interviews by the BCB search committee.

Original post:

Computer Science Faculty job with KIMEP University | 37576602 - The Chronicle of Higher Education


Internationally acclaimed computer science and health analytics expert named Dean of Ontario Tech’s Faculty of … – News

Dr. Carolyn McGregor, incoming Dean, Faculty of Business and Information Technology, Ontario Tech University

Ontario Tech University announces Dr. Carolyn McGregor AM as the new Dean of the Faculty of Business and Information Technology (FBIT), effective Monday, January 1, 2024.

Since moving from Australia to Canada in 2007 to join Ontario Tech as the university's Canada Research Chair (Health Informatics), Dr. McGregor has become an internationally renowned research leader in Big Data analytics, artificial intelligence (AI), edge (remote location) computing and data mesh (cross-domain) infrastructures.

She is the Ontario Tech Research Chair in Artificial Intelligence (AI) for Health and Wellness, and also the founding co-Director of the Joint Research Centre in Artificial Intelligence for Health and Wellness between Ontario Tech University and the University of Technology Sydney, Australia.

In addition to her academic role as a full Professor, she has held key administrative roles within FBIT, including Interim Dean since July 1, 2023, and previously Associate Dean, Research and Graduate Studies.

Dr. McGregor's leading-edge research achievements are highlighted by her international award-winning Artemis and Athena AI platforms for health, wellness, resilience and adaptation in critical care, astronaut health, firefighter training and tactical officer resilience assessment and development. Numerous film and television documentary profiles of her projects and partnerships by producers from around the world have earned major international exposure for her research and for Ontario Tech.

She has more than 200 refereed publications, more than $15 million in research funding, and three patents in multiple jurisdictions. She has deployed her Artemis platform in two hospitals in Ontario, and leads Canadian Department of National Defence research (in collaboration with Ontario Tech's ACE Core Research Facility) for new pre-deployment solutions for human performance in extreme weather. In 2022, she led a research study on the Axiom Ax-1 first all-private astronaut mission, in collaboration with the Canadian Space Agency (CSA) and NASA. She also leads the Space Health study, supported by the CSA, on the International Space Station.

She has served on national research grant-selection committees for Canada, France, Germany and the U.K., and is regularly called upon by national and global media to provide insight on the latest technology trends.

Among her many accolades, in 2014 she was named to the Order of Australia (AM) General Division, by Queen Elizabeth II, for her significant service to science and innovation through health-care information systems. In 2017, she was featured in the 150 Stories series commissioned by the Lieutenant Governor of Ontario and the Government of Canada to commemorate Ontario's 150th anniversary. In 2018, she was named one of Digital Health Canada's Women Leaders in Digital Health.

She currently serves as a member of the Institute of Electrical and Electronics Engineers (IEEE) Computer Society's Board of Governors, and a Director on the Board for Compute Ontario.

Prior to joining Ontario Tech, she led the strategic development of the foundational business analytics strategies for one of the largest banks and the largest retail chain in Australia, along with many other large corporations. From these experiences in the business world, she envisioned an opportunity to reapply her analytics expertise and AI knowledge to the realm of health care, with an ongoing goal to improve health outcomes for all.

Dr. Carolyn McGregor's pioneering research, together with her wealth of leadership experience and international recognition, position her well to lead Ontario Tech University's Faculty of Business and Information Technology as it pursues innovation to enhance the well-being of individuals, communities and our planet, and prepares its graduates to make a strong impact in their communities. The university's senior leadership team thanks Dr. McGregor for her leadership during her interim appointment, and looks forward to working with her as she takes on this new role.
- Dr. Lori A. Livingston, Provost and Vice-President, Academic, Ontario Tech University

Ontario Tech University's Faculty of Business and Information Technology is renowned for its transformative research that focuses on applying digital technologies for good; its strong industry partnerships; and its innovative undergraduate and graduate academic programs that prepare students to succeed in the workplace. I am thrilled to take on this leadership role and look forward to working with our team of diverse and innovative faculty members as we challenge and inspire students to push their own boundaries of thinking and learning, and build a brighter future together.
- Dr. Carolyn McGregor, incoming Dean, Faculty of Business and Information Technology, Ontario Tech University

Follow this link:

Internationally acclaimed computer science and health analytics expert named Dean of Ontario Tech's Faculty of ... - News


Busting 3 Myths About Teaching AI in the Classroom – Education Week

The most common mental picture of an artificial intelligence lesson might be this: High schoolers in a computer science class cluster around pricey robotics equipment and laptops, solving complex problems with the help of an expert teacher.

While there's nothing wrong with that scenario, it doesn't have to look that way, educators and experts say. Teaching AI can start as early as kindergarten. Students can learn key AI concepts without plugging in a single device. And AI can be applied to just about any subject, even basketball practice.

Educators from around the world shared how they have been implementing AI in their classes on a webinar hosted earlier this month by the International Society for Technology in Education, a nonprofit that helps educators make the most of technology.

ISTE has offered professional development allowing educators to explore AI for six years, training some 2,000 educators. The nonprofit also offers sample lessons for students at every grade level that can be applied across a range of subjects.

Here's how educators who went through the training have used it in their classrooms, and busted three big myths about teaching AI concepts to K-12 students.

It's never too early to start teaching AI, educators and experts say.

Cameron McKinley, a technology integration coach for Alabama's Hoover City Schools, has taught AI concepts to kindergarteners through 2nd graders. She starts by having students sort cards with pictures of different objects into categories, the same way intelligent machines sort data. Then, she has students use an AI computer program, Quickdrop. The students draw pictures for the technology to interpret.

It can be a good lesson in AI's potential for misunderstanding. For instance, the program asked one student to draw glasses, so she drew something she might drink milk or water out of. The machine, though, was looking for eyeglasses that can improve vision.

It was important that the student not get frustrated, McKinley said. "We encourage students to learn from failures of the technology," she said.

You don't need pricey devices to teach AI, educators argue.

Adam Brua, an information technology teacher at Rutland Intermediate School in Vermont, likes working on the unplugged activities ISTE recommends with his 6th grade students. In one activity, students create a graph featuring the characteristics of different animals, showing which animals have fur, four legs, a tail, and/or paws, for instance. That mirrors how machines learn to sort and categorize information.

"It's an activity any educator can do, almost anywhere," Brua said. "None of this requires expensive equipment or an advanced understanding of AI."

But these sorts of tasks still allow students to analyze AI's strengths and weaknesses, Brua said. AI technologies can do certain tasks extremely well, such as image and speech recognition, while other tasks, such as discerning emotions, are better left to humans, he said.

AI is a technology, sure, but there are ways to integrate it into all kinds of subjects, not just computer science.

For instance, Brandon Taylor, who volunteers as a teacher at Chicago Prep Academy, a school with a focus on student athletes, worked with his basketball player students to create an AI program that could analyze and provide feedback on skills such as shooting, dribbling, and agility through video recordings of students.

And Stacy George, an assistant professor at the University of Hawaii, worked with pre-service teachers on an AI social studies lesson. The budding teachers helped 2nd graders train a teachable machine to distinguish locally grown foods from those that must be flown into the state.

"It kept the students engaged," said one pre-service teacher in a video George shared on the webinar. "It was something different from what they're normally used to."

More here:

Busting 3 Myths About Teaching AI in the Classroom - Education Week


Modernizing the Internet’s architecture through software-defined networks – Tech Explorist

For about 30 years, the way data moves on the internet has mostly stayed the same. Now, researchers from Cornell and the Open University of the Netherlands are trying to update it. They've created a programmable network model that lets researchers and network administrators customize how data moves, giving them more control over the internet's air traffic control system. This could make the internet work better and be more adaptable.

When people started working on software-defined networking (SDN), they mainly focused on essential features to control how data moves through the network. However, recent efforts have looked into more advanced features, like packet scheduling and queueing, which impact performance.

One interesting concept is PIFO trees. They provide a flexible and efficient way to program how packets are scheduled. Previous studies have demonstrated that PIFO trees can handle various practical algorithms, including strict priority, weighted fair queueing, and hierarchical schemes. However, we still need a better understanding of the underlying properties and meanings of PIFO trees.
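The building block underneath those schemes, a PIFO (push-in first-out) queue, can be sketched in a few lines: packets are pushed in at a position determined by a rank, but always released from the head. This toy version only illustrates the idea; it is not the formal treatment in the paper, and hardware implementations work very differently:

```js
// Toy PIFO (push-in first-out) queue: entries are inserted by rank,
// but only ever dequeued from the head.
class Pifo {
  constructor() {
    this.items = []; // kept sorted by ascending rank
  }

  push(packet, rank) {
    // Insert behind all entries whose rank is less than or equal to this one.
    let i = this.items.findIndex((entry) => entry.rank > rank);
    if (i === -1) i = this.items.length;
    this.items.splice(i, 0, { packet, rank });
  }

  pop() {
    const head = this.items.shift();
    return head ? head.packet : undefined;
  }
}

// Strict priority as one example policy: lower rank wins regardless of arrival order.
const q = new Pifo();
q.push("bulk transfer", 5);
q.push("video call", 1);
q.push("email", 3);
console.log(q.pop()); // "video call"
```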

This new research studies PIFO trees from a programming language perspective. In the research paper, the researchers are setting the foundation for the next generation of networking technology. This includes the hardware (physical equipment) and the software (programs running on it). The goal is to create a system that can quickly adapt to different scheduling needs online.

Anshuman Mohan, a doctoral candidate in the field of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, said, "It takes time to design, test and deploy hardware. Once we've rolled it out, we are financially and environmentally incentivized to keep using that hardware. This is in tension with the ever-changing demands of those who manage networks running on that hardware."

In creating the next generation of networking technology, the research team focused on a crucial component: the network switch. This device, about the size of a small pizza box, plays a vital role in making networks and the internet work.

Switches connect devices to a computer network and manage the flow of data. They are responsible for packet scheduling, which determines how data moves through a network. Imagine the switch as handling packets of data from various users: emails, website visits, or video calls on Zoom. The switch's packet scheduler organizes and prioritizes these data clusters based on rules set by network managers. Finally, the switch sends these packets to other switches until they reach the user's device.

However, until now, it has not been easy to customize this air traffic control process. The reason is that scheduling parameters are traditionally baked into the switch by the manufacturer. Now, this rigidity doesn't work.

Mohan said, "Our work uses techniques from programming languages to explain how a wide variety of packet scheduling policies can be realized on a single piece of hardware. The users could reconfigure their scheduling policy every hour if they wanted, and, thanks to our work, find that each of those policies magically fits on the same piece of hardware."


Original post:

Modernizing the Internet's architecture through software-defined networks - Tech Explorist


What Are We Building, and Why? | TechPolicy.Press – Tech Policy Press

Audio of this conversation is available via your favorite podcast service.

At the end of this year in which the hype around artificial intelligence seemed to increase in volume with each passing week, it's worth stepping back and asking whether we need to slow down and put just as much effort into questions about what it is we are building and why.

In today's episode, we're going to hear from two researchers at two different points in their careers who spend their days grappling with questions about how we can develop systems and modes of thinking about systems that lead to more just and equitable outcomes, and that preserve our humanity and the planet:

What follows is a lightly edited transcript of the discussion.

Batya Friedman:

I'm Batya Friedman. I'm in the Information School at the University of Washington, a professor there and I co-direct both the value sensitive design lab and also the UW Tech Policy Lab.

Aylin Caliskan:

I am Aylin Caliskan and I'm an assistant professor at the Information School. I am an affiliate of the Tech Policy Lab right now. I am also part of the Responsible AI Systems and Experiences Center, the Natural Language Processing Group, as well as the Value Sensitive Design Lab.

Batya Friedman:

Aylin is also the co-director-elect for the Tech Policy Lab. As I am winding down on my career and stepping away from the university, Aylin is stepping in and will be taking up that pillar.

Justin Hendrix:

And we have a peculiar opportunity during this conversation to essentially talk about that transition, talk a little bit about what you have learned, and also to look at how the field has evolved as you make this transition and into retirement and turn over the reins as it were.

But Dr. Friedman, I want to start with you and just perhaps for my listeners, if they're not familiar with your career and research, just ask you for a few highlights from your career, how your work has influenced the field of AI bias and consideration around design and values and technological systems. What do you consider your most impactful contribution over these last decades?

Batya Friedman:

Well, I think one clear contribution was a piece of work that I did with Helen Nissenbaum back in the mid-nineties. Actually, we probably began in the very early nineties, on bias in computer systems, published in 1996, and at that time I think we were probably a little bit all by ourselves working on that. I think that the journal didn't quite know what to do with it at the time, and that's a paper that if you look at the trajectory of its citations, it had a very slow uptake. And then I think as computing systems have spread in society over the last five to seven years, we've seen just an enormous reference back. And there's another sense of impact of the work I've done, which is not just around bias but around human values more generally and how to account for those in our technical work. Just as one example of evidence of impact: in the Microsoft responsible AI work and impact assessments that they published within the last year, they acknowledge heavily drawing on value-sensitive design and its entire framework in the work that they've done.

Justin Hendrix:

I want to ask you just to maybe reflect on that work with Helen Nissenbaum for a moment, and some of the questions that you were asking, what is a biased computer system? Your examples started off with a look at perhaps the way that flight reservation systems work. Can you kind of cast us back to some of the problems that you wanted to explore and the way that you were able to define the problem in this kind of pre-web moment?

Batya Friedman:

Well, we were looking already at that time at ways in which information systems were beginning to diffuse across society and we were beginning to think about which of those were visible to people and which of those were in some sense invisible because they were hidden in the technical code. In the case of airline reservation systems, this has to do with what shows up on the screen. And you can imagine too that algorithms, that technical algorithms where someone is making a technical decision. I have a big database of people who need organs and everyone in the database is stored in alphabetical order. I need to display some of those. And so it's just an easy technical decision to start at the beginning of the list and put those names up on the screen. The challenge comes when you have human beings and the way human beings work is once we find a match, we're kind of done.

So you sure wish in that environment if you needed an organ, your last name started with an A and not a Z. So it's starting to look at that and trying to sort out where are the sources of bias coming from, which are the ones that already pre-exist in society like redlining, which we're simply embedding into the technology. We're almost bringing them over, which of them are coming from just making good technical choices without taking context into account. But then once you embed that in a social environment, bias may emerge. And then also starting to think about systems that given the environment they were developed for, may do a reasonable job managing bias that's never perfect. But then when you use them with a very different population, a very different context, different cultural assumptions, then you see what emerges or bias. And so at that time we identified these three broad sources for bias and systems. So pre-existing social bias, technical bias from just technical decisions, and then this category of emergent bias. And those categories have stood the test of time. So that was way back in the mid-nineties and I think they're still quite relevant and quite helpful to people working say in generative AI systems today.

Justin Hendrix:

That perhaps offers me the opportunity to ask Dr. Caliskan a question about your work, and maybe it's a compound question, which is to describe the work that you've been doing around AI bias and some of the work you've done looking specifically at translation engines. How do you see the frameworks, the ideas that come from Dr. Friedman's work sort of informing the research you're doing today and where do you see it going in future?

Aylin Caliskan:

This is a great question. In 2015 and '16, I was frequently using translation systems, statistical machine translation systems, and I kept noticing biased translation patterns. For example, one of my native languages is Turkish, and Turkish is a gender-neutral language. There is one pronoun, o, meaning he, she, it or they. And my other native language is Bulgarian, and it's a grammatically gendered language, more gendered than English, and it has the Cyrillic alphabet. So I would frequently use translation to text my family, for example, in Bulgaria. And when I was translating sentences such as "O bir doktor" and "O bir hemşire," meaning he or she is a doctor, he or she is a nurse, the outcomes from translation systems were consistently "he's a doctor," "she's a nurse." And then we wanted to understand what is happening with natural language processing systems that are trained on large-scale language corpora and why they are exhibiting bias in decision-making processes such as machine translation generating outputs.

And we couldn't find any related work or any empirical studies except Batya's work from 1996, Bias in Computer Systems. And then we decided to look into this in greater detail, especially as language technology started becoming very widely used since its performance was improving, with all the developments we have in artificial intelligence, computing and information systems. And then, studying the representations in the language domain, which you can think of as natural language processing models and the way they perceive the world, the way they perceive language, I found out that perception is biased when it comes to certain concepts or certain social groups. For example, certain names that might be more representative of underrepresented groups or historically disadvantaged groups were closer in the representational space to more disadvantaging words, whereas historically dominant groups' representation, or words related to them, were closer in the representational space mathematically to more positive words.

And then we developed a principled and generalizable method to empirically study bias in computer systems, and found that large-scale language corpora are a source of implicit biases that have been documented in social cognition in society for decades. And these systems that are trained on large-scale sociocultural data embed the biases that are in human-produced data, reflecting systemic inequities, historically disadvantaging data and biases related to the categories Batya mentioned. And over the years, we have shown that this generalizes to artificial intelligence systems that are trained on large-scale sociocultural data, because large-scale sociocultural data is a reflection of society, which is not perfect. And AI systems learn these reflections in imperfect ways, adding their own, for example, emergent biases and associations as well. And since then I have been focusing on this topic, and it is a great coincidence that the person that contributed foundational work in this area is at the same school as I am.
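To make the "closer in the representational space" point concrete, here is a toy illustration (the vectors are made up and tiny; real embeddings have hundreds of dimensions, and published tests such as the word-embedding association test average over many attribute words): a target word's association can be estimated by comparing its cosine similarity to "pleasant" versus "unpleasant" attribute vectors.

```js
// Toy illustration with made-up 3-dimensional vectors.
function cosine(a, b) {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, vi) => sum + vi * vi, 0));
  return dot / (norm(a) * norm(b));
}

const pleasant = [0.9, 0.1, 0.2];    // stand-in for a "pleasant" attribute vector
const unpleasant = [0.1, 0.9, 0.3];  // stand-in for an "unpleasant" attribute vector
const targetWord = [0.8, 0.2, 0.25]; // embedding of some target word or name

// A positive difference means the target sits closer to "pleasant" in the space.
const association = cosine(targetWord, pleasant) - cosine(targetWord, unpleasant);
console.log(association.toFixed(3));
```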

Justin Hendrix:

Dr. Friedman, when you think about this trajectory, from the initial foundational work that you were doing close to 30 years ago to the work that's just been described, do you think about where we've got to, both in our understanding of these issues and perhaps also in our societal response, the industry response, or even the policy response? Do you think we've really made progress? I mean, this question around bias in AI systems, and in technological systems generally, is better understood, but to some extent, and I don't have exact numbers on this, it seems like a bigger problem today perhaps than it's ever been. Is that just a function of the growth of the role of technology in so many aspects of society?

Batya Friedman:

Well, at a certain point one just has to say yes to that, right? Because if we had, as was predicted many years ago, only five computers in the world, and the technology wasn't used in very many sectors of society, then I don't think we would be as concerned. So certainly the widespread, pervasive uptake is part of the motivation behind the concern. I think an interesting question to ask here is: in what ways can we hold ourselves accountable, both as technologists and also as members of society and governments and the private sector, for being alert and checking these issues as they emerge? So for example, I talked about a situation where you have a database, everybody's in alphabetical order, and then you display that data in alphabetical order on a screen that can only list 20 names at a time. We know that's problematic. And initially, we didn't know that was problematic.

So if you did that, say, 30 years ago, there would be unfortunate biases that would result, and it was problematic. But now that we know that's a problem, I would say any engineer who builds a system in that way should be held accountable; that would actually be negligent. And this is how we have worked in engineering for a long time: as we develop systems, as we gain experience with our methods and techniques, what we consider to be a best practice changes. And the same is true, say, for building reliable or correct systems. We can't build a fully reliable or correct system, yet we still hold that out as an ideal. And we have methods that we hold ourselves accountable to. And then if we have failures, we look to see if those methods were used, and if they were, then we try to mitigate the harms, but we don't cry negligence. And I think the same things can apply here.

So then to your question, I would say we have a lot less experience at this moment in time with understanding what the methods are that can really help us identify these biases early on, and in what ways we need to remain alert. How can we diagnose these things? How can we mitigate them as they unfold? We know that many of these things we will be able to identify or see in advance. So some we can, but other things are going to unfold as people take up these systems. And so we know that our processes also need to engage with systems as they're being deployed in society. And that's, in some ways, a shift in terms of how we, at least with computational systems, think about our responsibilities towards them. If I were talking about building bridges, you would say, oh yes, of course you need a maintenance plan. You need people examining the bridge once a year to see if there are new cracks, mitigating those cracks when they happen. So we know how to do this as engineers with other kinds of materials. We're less experienced with digital materials.
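To make the alphabetical-display example Dr. Friedman described concrete, here is a minimal sketch, with entirely made-up names and numbers, of how a fixed-size first screen over an alphabetically sorted list concentrates attention on names early in the alphabet whenever reviewers rarely page past the first screen.

import random
import string
from collections import Counter

random.seed(0)
PAGE_SIZE = 20  # the screen can only list 20 names at a time

# Hypothetical waiting list: 200 candidates tagged with a random surname initial.
candidates = sorted(random.choice(string.ascii_uppercase) + f"_{i}" for i in range(200))

def first_screen(sorted_list, page_size=PAGE_SIZE):
    # Only what a reviewer sees if they never page past the first screen.
    return sorted_list[:page_size]

# Simulate many review sessions in which a selection is made from the first screen.
picks = Counter()
for _ in range(1000):
    shown = first_screen(candidates)
    picks[random.choice(shown)[0]] += 1  # tally selections by surname initial

print(picks.most_common())  # selections cluster entirely at the top of the alphabet

Every selection comes from the handful of initials that happen to sort first, even though the underlying list was generated uniformly; nothing in the data intended that outcome, it falls out of a presentation choice.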

Justin Hendrix:

We are, though, seeing a sort of industry pop up around some of these ideas: folks who are running consultancies, building technological tools, et cetera, to deliver on the ability to investigate various systems for bias. Some of that's being driven by laws. I'm sitting in New York City, where there's a law around bias in automated employment decisions that's recently come into effect, for instance. What do you make of that? What do you think of the, I suppose, commercialization of some of your ideas?

Batya Friedman:

Let's go back to building bridges. Or, I spent a lot of time actually in the Bay Area, so I'm familiar with earthquakes. Thinking by analogy: if I build a building using the very best techniques we know and it can withstand a 6.0 earthquake, and we have a 3.0 earthquake and the building collapses, then I'm going to be looking at, well, what were the processes that were used? And I'm going to make that cry of negligence, and I have a sense of standards, and the people who did the work are going to be held accountable. If on the other hand it was a 9.0 earthquake, and we actually don't know how to build for that, we're going to consider that a tragedy, but we aren't going to say that the engineers did a poor job. So I think one of the first things we need to be able to do is take an honest look at where we are with respect to our methods and techniques and best practices.

I think we're at the very beginning. Like anything, we will only get good at something if we work really hard at it. So I think we need to be investing a lot of our resources in developing better methods for identifying or anticipating biases early on, and techniques for having those reported and then mitigated. And those techniques and processes need to take into account not just the people who are technically savvy and have access to technology, but to recognize that people who may never even put their hands on the technology might be significantly affected by biases in the system, and that they need ways, perhaps non-technical ways, of communicating harms that they're experiencing, and then having those taken up seriously and addressed. So I would say that we're at the very beginning of learning how to do that.

I would also observe that many of the resources are coming from certain places, and those places and the people who are making those decisions have certain interests. So can we look at those things impartially, so that we take a broader swath of the stakeholders who are impacted, and so that when we start to identify where and how the stakeholders need to be accounted for, and where and how the resources are being allocated to develop methods that will account for those stakeholders, there is something even-handed happening there? So a lot of this is about process, and a lot of the process that I've just sketched is fundamental to value-sensitive design, which is really: how do we foreground these values and use them to drive the way in which we do our engineering work? Or, in this case, we view policy as a form of technology, so one moves forward on the technical aspects and the policy and regulatory aspects as a whole, and that broadens your design space. So to your question, I would say we're at the very beginning, and a really critical question to ask is whether those forces that are moving forward are themselves representing too narrow a slice of society, and how we might broaden that. How do we do that first assessment? And then how might we broaden it in an even-handed manner?

Justin Hendrix:

Dr. Caliskan, can I ask you, as you think about some of the things that are in the headlines today, some of the technologies that are super hot at the moment, large language models, generative AI more broadly: is there anything inherent perhaps in those technologies that makes looking for bias, or even having some of these considerations, any more difficult? There's lots of talk about the challenges of explainability, the black-box nature of some of these models, the lack of transparency in training data, all the sorts of problems that would seem to make it more difficult to be certain that we're following the kinds of best practices that Dr. Friedman just discussed.

Aylin Caliskan:

This is a great question. Dr. Friedman just mentioned that we are trying to understand the landscape of risks and harms here. These are new technologies that became very popular recently. They've reached the public recently, although they have been developed for decades now. And in the past we have been looking at more traditional use cases, for example decision-making systems in college admissions, resume screening, and employment decisions, or representational harms that manifest directly in the outputs of AI systems. But right now, the most widely used generative AI systems are typically offered by just a few companies. And they have the data about what these systems are being used for. And since many of them are considered general-purpose AI systems, people might be using them for all kinds of purposes, to automate mundane tasks or to collaborate with AI. However, we do not have information about these use cases; such information might be proprietary and might bear on market decisions in certain cases. But without understanding how exactly these generative AI systems are being used by millions if not billions of people, we cannot trivially evaluate potential harms and risks.

We need to understand the landscape better so that we can develop evaluation methods to measure these systems that are not transparent, that are not easy to interpret. And once we understand how society is co-evolving with these systems, we can develop methods not just to measure things and evaluate potential harms, but also to think about better ways to mitigate these problems, which are socio-technical: technical solutions by themselves are not sufficient, and we need regulatory approaches in this space as well as raising public awareness, as Dr. Friedman mentioned. Stakeholders, users: how can they understand how these systems might be impacting them when they are using them for trivial tasks? What kinds of short-term and long-term harms might they experience? So we need a holistic approach to understand where these systems are deployed, how they are being used, how to measure them, and what can be done to understand and mitigate the harms.

Batya Friedman:

So I'd like to pick up on one of the things that you mentioned there, which is that there are very large language systems that are being used for all kinds of mundane tasks. And I'd just like to think about that for a minute, have us think together about that. So I'm imagining a system that touches all kinds of things in my life; this system is now becoming the basis on which I am engaging in things. It begins to structure the language that I use. It not only structures language, but it structures, in certain ways, thought. And I want to contrast that with a view of human flourishing where the depth and variety of human experience, the richness of human experience, is the kind of world that we want to live in, where there are all kinds of different ways of thinking about things: cultural ways, languages, poetic ways, different kinds of expression.

Even what Aylin was talking about in the beginning: she grows up speaking Turkish and Bulgarian and now English, right? Think of her ability for expression across those languages. That's something I'll never experience. So I think another question that we might ask, separate from the bias question, perhaps related but separate, has to do with a certain kind of homogenization as these technologies pervade so much of society and even cross national and international boundaries. Embedded in them are ways of thinking, and what happens over time? Intergenerationally, you think of young people coming of age with these technologies and absorbing, almost in the background, like the ocean behind them, a very similar way of thinking and being in the world. What are the other things that are beginning to disappear? And I wonder if there isn't a certain kind of impoverishment of our collective experience as human beings on the planet that can result from that.

And so I think that's a very serious concern that I have. And beyond that specific concern, what I want to point out is that arriving at that concern comes from a certain kind of, I would say, principled, systemic way of thinking: what does it mean if we take this technology seriously and think of it at scale, in terms of uptake and over longer periods of time? What might those implications be? And then, if we could agree on a certain notion of human flourishing that would allow for this kind of diversity of thought, that might really change how we wanted to use and disseminate this kind of technology or integrate it into our everyday lives. And we might want to make a different set of decisions now than the set of decisions that seem to be unfolding.

Justin Hendrix:

I think that's a fairly bald critique of some of the language we're hearing from Silicon Valley entrepreneurs who are suggesting that AI is the path to abundance, that it is the path to some form of flourishing that seems to be mostly about economic prosperity. Do you think of your ideas as standing in opposition, perhaps, to some of the things that we're hearing from those Silicon Valley leaders?

Batya Friedman:

I guess, what I would step back and say is what are the things that are really important to us in our lives? If we think about that societally, we think about that from different cultural perspectives. What are the things that we might identify? And then to ask the question, how can we use this technology to help us realize those values that really matter to us? And I would also add to that thinking about the planet. Our planet is quite astonishing, right? It is both finite and regenerative to the extent that we don't destroy the aspects that allow for regeneration. And so I think another question we can also ask about this technology, it depends on data, right? And where does data come from? Well, data comes from measurement, right? Of something, somehow. Well, how do we get measurement? Well, somehow we have to have some kind of sensors or some kind of instrumentation or something such that we can measure things, such that we can collect those things all together and store them somewhere, such that we can operate on them with some kinds of algorithms that we've developed.

Okay, so you can see where this is going, which is that if you take that at scale, there's actually a huge amount of physical infrastructure that supports the kind of computation we're talking about, for the kind of artificial intelligence people are talking about now. So while on the one hand we may think about AI as something that exists in the cloud, and the cloud is this kind of ephemeral thing, in fact what the cloud really is, is a huge number of servers that are sitting somewhere and generating a lot of heat, so they need to be cooled, often cooled with water, often built with lots of cables, using lots of extracted minerals, and so on. And not only that, but the technology itself deteriorates, and some of it needs to be replaced after a certain number of years, whether it's five years or 10 years or 25 years. When you think about doing this at scale, the magnitude of that is enormous.

So the environmental impact of this kind of technology is huge. And so we can ask ourselves, well, how sustainable, how wise a choice is it to build our society on these kinds of technologies that require that kind of relationship to materials? And by materials I mean the physical materials, the energy, the water, all of that. So when I step back and I think about the flourishing of our society, and about tools and technologies and infrastructure that can support that over time, for myself, I'm looking for technologies that make sense on a finite and regenerative planet with the population scales that we have right now, right? We could shrink the population and that would change a lot of things as well. Those are the kinds of questions. So what I would say about many of the people making decisions around artificial intelligence right now is that I don't think they're asking those questions, at least not seriously and in a way that would cause them to rethink how they are building and implementing those technologies.

So there are explorations. There are explorations about putting data centers at the bottom of the ocean because there's natural cooling down there. There are explorations around trying to improve, say, battery storage or energy storage. But the question is, do we invest in and build a society that is dependent on these technologies before we've actually solved those issues, right? And just by analogy, think about nuclear power. When I was an undergraduate, there were discussions, nuclear power plants were being built, and the question of nuclear waste had not been solved. And the nuclear engineers I talked to at the time said, "Well, we've just landed on the moon. We're going to be able to solve that problem too in 10 years. Don't worry about it." Well, here it is, how many years later, decades later, and we still have nuclear waste sitting in the ground that will be around for enormous periods of time.

That's very dangerous, right? So how do we not make that same kind of mistake with computing technologies? We don't need to throw the baby out with the bathwater, but we can ask ourselves whether this direction is more like the situation with nuclear waste around nuclear power, or whether there is an alternative way to go, and what that would be. And could we be having public conversation at this level? And could we hold our technical leaders, both the leaders of the companies, the CEOs, and our technologists, accountable to these kinds of criteria as well? That, I think, would be a really constructive way for us to be moving at this moment in time.

Justin Hendrix:

So just in the last few days, we've seen the EU apparently agree on a final version of its AI Act, which will eventually become law depending on whether it makes it through the last checks and processes there. We've seen the executive order from the Biden administration around artificial intelligence. We're seeing a slew of policies emerge across states in the US, which are perhaps more likely to become law than anything in the US Congress. What do you make right now of whether the policy response to artificial intelligence in particular is sufficient to the types of problems and challenges that you're describing? I might ask you both that question, but Dr. Caliskan, for you: how do you think about the role of the lab in engaging with these questions going forward over these next few years?

Aylin Caliskan:

We are at the initial stages of figuring out goals and standards for moving forward with these powerful technologies. And we have many questions. In some cases, we do not even know exactly what the questions are, but the technology is already out there. It has been developed and deployed, and it is currently having an impact on individuals and society at scale. So regulation is behind, and accordingly we now see a lot of work, interest, and demand in this area to start understanding the questions and finding some answers. But given that the technology is being deployed in all kinds of socio-technical contexts, understanding the impact and the nuance in each domain and sector will take time, even as the technology is still evolving very rapidly and proliferating in all kinds of application scenarios. It is great that there is some progress and that there is more focus on this topic in society, in the regulatory space, and in academia and the sciences as well.

But it's moving very rapidly. So rapidly that we are not necessarily able to catch the problems in time to come up with solutions, and the problems are rapidly growing. So how can we ensure that, when developing and deploying these systems, we have more robust standards and checkpoints before these systems are released and impact individuals, making decisions that change life outcomes and opportunities? Is there a way to slow down so that we can have higher-quality work in this space to identify and potentially come up with solutions to these problems? And I would also like to note that, yes, the developments from the EU or the executive order are great, but even when we scratch the surface to find some solutions, they will not be permanent solutions. These are socio-technical systems that evolve with society, and we will have to keep dealing with any side effects dynamically, on an ongoing basis, similar to the analogy Dr. Friedman just made about bridges and their annual maintenance. You will need to keep looking into what kinds of problems and benefits might emerge from these systems. How can we amplify the benefits and figure out ways to mitigate the problems, while understanding that these systems are impacting everyone, and the earth, at great scale?

Justin Hendrix:

That maybe gives me an opportunity to ask you, Dr. Friedman, a question about problem definition. There's been a lot of discussion here about what the right questions to ask are, and about the ways we can understand the problems and how best to address them. You've spent close to 30 years on these questions, a research career essentially about developing frameworks to put these questions into. What have you learned about problem definition, and what do you hope to pass along during this transition, as you pass the baton here?

Batya Friedman:

So I would just say that my work in value sensitive design is fundamentally about that. And we define human values as what is important to people in their lives, especially things with moral and ethical import. And that definition has worked well for us over time. And along with that, we've developed methods and processes. So I think of the work that we've done through the adage about how you can give a man a fish, or you can teach him how to fish and he'll feed himself for the rest of his life; or, I suppose, you could give that to any person and they will be able to do that. I think the work that we've been involved in is thinking about, well, what does that fishing rod look like? Are there flies, and what are those flies about? What are the methods for casting that make sense, and how can you pass along those methods? And also there's knowledge of the river and knowledge of the fish, and knowledge of the insects and the environment, and taking all of that into account, and also knowing that if you overfish the river, then there won't be fish next year.

And there may be people upstream who are doing something and people downstream who are doing something, and if you care about them, you want to ensure that your fishing is also not going to disrupt the things that are important to them in their lives. So you need to bring them into the conversation. And so I would say that my contribution has been to frame things such that one understands the roles of the tools and technologies, those fishing rods, those flies, the methods, and also the understanding of the environment, how to tap into the environment and the knowledge there, and to have other kinds of tools for understanding and engaging with other stakeholders who might be impacted in these systemic ways. So my hope is that I can pass that set of knowledge, tools, and practices to Aylin, who will in her lifetime encounter the new technologies, whatever those happen to be as they unfold, so that she will not be starting from scratch and having to figure out for herself how to design a good fishing rod.

She can start with the ones that we've figured out, and she's going to need refinements on those, and she's going to decide that there are places where the methods don't yet do a good enough job. And there will be other things that have happened; maybe there was a massive flash flood and that's changed the banks of the river, and there's something different about the environment to understand. But I hope she's not starting from scratch, and can take those things, extend and build on them, and have the broader ethos of the exploration as a way to approach these problems. So that's what I hope I'm passing on, and I trust that she will take it up and make good, wise choices with it. I think that's all we can do, right? We're all in this together over the long term.

Aylin Caliskan:

I am very grateful that I have this opportunity to learn from colleagues who have been thinking deeply about these issues for decades, when no one even had an idea about these topics that are so important for our society. And in this process, I am learning, and it is an evolving process, but I feel very grateful that I have the opportunity to work with the Tech Policy Lab community, including Batya, Ryan, and Yoshi, who have been so caring, thoughtful, and humane in their research, always incorporating and prioritizing human values and providing a strong foundation, a good fishing rod, to tackle these problems. And I am very excited that we will continue to collaborate on these topics. It is very exciting, but at the same time it is challenging, because these impacts come with great responsibility. I look forward to doing our best, given our limited time and resources, and to figuring out ways to also pass these methodologies, these understandings, these foundational values on to our tech policy community and to future generations, as they will be the ones holding these fishing rods and focusing on these issues in the upcoming decades and centuries.

Batya Friedman:

I wanted to pick up on something also that Aylin had said, not in this comment but the comment before, about slowing things down. And I just wanted to make another observation for us. Historically, when people have developed new tools and technologies, they have been primarily of a physical sort, though I think something like writing and the printing press are a little bit different. But they take a certain amount of time to actually produce, and it takes a certain amount of time for things to be disseminated. And during that time, people have a chance to think, and a chance to come to a better understanding of what a wise set of decisions might be. We don't always make wise decisions, but at least we have some time, and human thought seems to take time, right? Ultimately, we all have our brains, and they operate at a certain kind of speed and in a certain kind of way.

I think one of the things we can observe about our digital technologies and the way in which we have implemented them now is that if I have a really good idea this afternoon and I have the right set of technical skills, I can sit down and I can build something and by 7:00 in the morning, I can release that and I can release and broadcast that basically across the globe. And others who might encounter it, if the stars align, might pick that up. And by 5 o'clock in the evening, twenty-four hours from when I had the first idea, maybe this thing that I've built is being used in many places by hundreds, if not thousands or tens of thousands, hundreds of thousands of people, right? Where is there time for wisdom in that? Where is there time for making wise decisions? So I think in this moment we have a temporal mismatch, shall we say, between our capacity as human beings to make wise choices, to understand perhaps the moral and ethical implications of one set of choices versus another, and the speed at which new ideas can be implemented, disseminated, and taken up at scale.

And that is a very unique challenge, I think, of this moment. And so thinking about that really carefully and strategically, I think, would be hugely important. So without other very good ideas, one thing one might say is, well, what can we do to speed up our own abilities to think wisely? That might be one kind of strategy. Another strategy might be, well, can we slow this part down, the dissemination part, if we can't manage to make ourselves go more quickly in terms of our own understandings of wisdom? But at least getting the clarity of that structural issue on the table and very visible, I think, is also helpful. And from a regulatory point of view, I think understanding that is probably also pretty important. Usually when people say you're slowing down a technology, that's seen as quite a negative thing, as squashing innovation. But I think when you understand that we are structurally in a different place and we don't have a lot of experience yet, maybe that's some additional argument for trying to use regulation to slow things in a substantial way. And what heuristics we might use, I don't know, but I think that is really worth attending to.

Justin Hendrix:

Well, I know that my listeners are also on the same journey that you've described of trying to think through these issues and trying to imagine solutions that perhaps fit the bill of wisdom, or of a flourishing society, certainly a democratic and equitable and more just society.

I want to thank both of you for all of the work that you've done to advance these ideas, both the work that's come before and the work that's to come from both of you and from UW and the Tech Policy Lab more generally. I thank you both for talking to me today. Thank you so much.

Batya Friedman:

Thank you.

Aylin Caliskan:

Thank you.

Read more from the original source:

What Are We Building, and Why? | TechPolicy.Press - Tech Policy Press
