
Adding Temporal Resiliency to Data Science Applications | by Rohit Pandey | Mar, 2024 – Towards Data Science

Image by Midjourney

Modern applications almost exclusively store their state in databases, and read any state they need to perform their tasks back from those databases. We'll concern ourselves with adding resilience to the processes of reading from and writing to these databases, making them highly reliable.

The obvious way to do this is to improve the quality of the hardware and software comprising the database so that our reads and writes never fail. But this runs into diminishing returns: once we're already at high availability, pouring in more money moves the needle only marginally. Adding redundancy quickly becomes a much better strategy for achieving high availability.

So, what does high reliability via adding redundancy to the architecture look like? We remove single points of failure by spending more money on redundant systems. For example, we can maintain redundant copies of the data so that if one copy gets corrupted or damaged, the others can be used to repair it. Another example is a redundant database that can be read from and written to when the primary one is unavailable. We'll call these kinds of solutions, where additional memory, disk space, hardware or other physical resources are allotted to ensure high availability, spatial redundancy. But can we get high reliability (going beyond the characteristics of the underlying databases and other components) without spending any additional money? That's where the idea of temporal redundancy comes in.

All images in this article unless otherwise specified are by the author.

If spatial redundancy is running with redundant infrastructure, then temporal redundancy is running more with existing infrastructure.

Temporal redundancy is typically much cheaper than spatial redundancy. It can also be easier to implement.

The idea is that the reliability-compromising events that hit our applications and databases tend to be restricted to certain windows in time. If the
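The simplest form of temporal redundancy is retrying a failed read or write after a short wait, on the bet that the failure window has passed. Here is a minimal sketch; the `TransientError` class and the backoff parameters are illustrative stand-ins, not from the article:

```python
import random
import time


class TransientError(Exception):
    """Stand-in for a driver's transient failure (timeout, connection reset)."""


def with_retries(op, max_attempts=5, base_delay=0.1):
    """Retry a flaky database operation with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return op()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # the outage outlasted our retry window
            # Sleep 0.1s, 0.2s, 0.4s, ... plus jitter to avoid retry storms
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

The jitter matters: if every client retries on the same schedule after an outage, the synchronized retries themselves can re-trigger the failure.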

Read more from the original source:

Adding Temporal Resiliency to Data Science Applications | by Rohit Pandey | Mar, 2024 - Towards Data Science


FrugalGPT and Reducing LLM Operating Costs | by Matthew Gunton | Mar, 2024 – Towards Data Science

There are multiple ways to measure the cost of running an LLM (electricity use, compute cost, etc.); however, if you use a third-party LLM (an LLM-as-a-service), you are typically charged based on the tokens you use. Different vendors (OpenAI, Anthropic, Cohere, etc.) count tokens in different ways, but for the sake of simplicity, we'll consider the cost to be based on the number of tokens processed by the LLM.

The most important part of this framework is the idea that different models cost different amounts. The authors of the paper conveniently assembled the table below highlighting the differences in cost, and they are significant. For example, AI21's output tokens cost an order of magnitude more than GPT-4's do in this table!
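Since billing is per token, the cost of a single call is just a weighted sum of prompt and completion tokens. A quick sketch with hypothetical per-1K-token prices (the numbers below are illustrative, not the paper's):

```python
def request_cost(prompt_tokens, completion_tokens, price_in, price_out):
    """Cost in USD of one LLM call, given per-1K-token input/output prices."""
    return (prompt_tokens / 1000) * price_in + (completion_tokens / 1000) * price_out


# Same 500-in / 200-out request priced at two hypothetical tiers
cheap = request_cost(500, 200, price_in=0.0005, price_out=0.0015)  # ~0.00055 USD
premium = request_cost(500, 200, price_in=0.03, price_out=0.06)    # ~0.027 USD
```

Even on a single modest request, the gap between tiers here is roughly 50x, which is what makes a cascade worth building.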

As part of cost optimization, we always need to figure out a way to maximize answer quality while minimizing cost. Typically, higher-cost models are higher-performing, able to give better answers than lower-cost ones. The general relationship can be seen in the graph below, with FrugalGPT's performance overlaid on top in red.

Exploiting the vast cost difference between models, the researchers' FrugalGPT system relies on a cascade of LLMs to give the user an answer. Put simply, the user query begins with the cheapest LLM, and if the answer is good enough, it is returned. However, if the answer is not good enough, the query is passed along to the next cheapest LLM.

The researchers used the following logic: if a less expensive model answers a question incorrectly, it is likely that a more expensive model will answer it correctly. Thus, to minimize costs, the chain is ordered from least expensive to most expensive, on the assumption that quality goes up with cost.

This setup relies on reliably determining when an answer is good enough and when it isn't. To solve this, the authors trained a DistilBERT model that takes the question and answer and assigns a score to the answer. As the DistilBERT model is orders of magnitude smaller than the other models in the sequence, the cost to run it is almost negligible compared to the others.
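Putting the pieces together, the cascade itself is only a few lines: try models cheapest-first and stop as soon as the scorer is satisfied. In this sketch, `scorer` stands in for the paper's DistilBERT scoring model, and the threshold value is illustrative:

```python
def cascade(query, models, scorer, threshold=0.9):
    """FrugalGPT-style cascade: query models from cheapest to most expensive,
    returning the first answer the scorer judges good enough."""
    answer = None
    for model in models:  # ordered least to most expensive
        answer = model(query)
        if scorer(query, answer) >= threshold:
            return answer
    return answer  # fall back to the priciest model's answer if none passed
```

Note the failure mode this design accepts: if even the most expensive model's answer scores below the threshold, the system still returns it rather than returning nothing.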

One might naturally ask, if quality is most important, why not just query the best LLM and work on ways to reduce the cost of running the best LLM?

When this paper came out, GPT-4 was the best LLM they evaluated, yet GPT-4 did not always give a better answer than the FrugalGPT system! (Eagle-eyed readers will see this in the cost-vs-performance graph from before.) The authors speculate that, just as the most capable person doesn't always give the right answer, the most complex model won't either. Thus, by having the answer go through a filtering process with DistilBERT, you remove answers that aren't up to par and increase the odds of a good answer.

Consequently, this system not only reduces your costs but can also increase quality beyond just using the best LLM!

The results of this paper are fascinating to consider. For me, they raise questions about how we can go even further with cost savings without having to invest in further model optimization.

One such possibility is to cache all model answers in a vector database and then do a similarity search to determine if the answer in the cache works before starting the LLM cascade. This would significantly reduce costs by replacing a costly LLM operation with a comparatively less expensive query and similarity operation.
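As a sketch of that caching idea: embed each incoming query, compare it against cached query embeddings, and only fall through to the LLM cascade on a miss. Everything here (the flat-list cache layout, the 0.95 similarity threshold) is illustrative, not from the paper:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def cached_answer(query_vec, cache, threshold=0.95):
    """Return the answer whose cached query embedding is closest to
    query_vec, if close enough; otherwise None (meaning: run the cascade)."""
    best_sim, best_answer = 0.0, None
    for vec, answer in cache:  # cache: list of (embedding, answer) pairs
        sim = cosine(query_vec, vec)
        if sim > best_sim:
            best_sim, best_answer = sim, answer
    return best_answer if best_sim >= threshold else None
```

A real deployment would use a vector database with an approximate-nearest-neighbor index instead of this linear scan, but the control flow (cache hit returns immediately, miss triggers the cascade) is the same.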

Additionally, it makes you wonder whether outdated models can still be worth cost-optimizing: if you can reduce their cost per token, they can still create value in the LLM cascade. Similarly, the key question is at what point you hit diminishing returns by adding new LLMs to the chain.

Read more:

FrugalGPT and Reducing LLM Operating Costs | by Matthew Gunton | Mar, 2024 - Towards Data Science


Claude 3 vs ChatGPT: Here is How to Find the Best in Data Science – DataDrivenInvestor

Claude 3 vs ChatGPT: The Ultimate AI & Data Science Duel. Created with Abidin Dino AI.

The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.

Edsger W. Dijkstra

Echoing Dijkstra's insight, we delve into the capabilities of two LLMs, contrasting their prowess in the arena of data science.

Here are the prompts we'll use to compare:

Claude 3, developed by Anthropic (a company founded by ex-OpenAI employees and supported by a $2 billion investment from Google in October), has quickly gained fame for its exceptional reasoning abilities.

You can use it here: https://claude.ai/login?returnTo=%2F. But let's first see how it compares with well-known LLMs.

Here is a technical comparison of Claude 3, GPT, and Gemini (formerly Bard, Google's LLM).

You can see the image below, which compares Claude 3 Sonnet (free) and Claude 3 Opus and Haiku (the paid versions of Claude 3) with Gemini 1.0 Ultra (the paid version) and Gemini 1.0 Pro (the free version).

Here is the original post:

Claude 3 vs ChatGPT: Here is How to Find the Best in Data Science - DataDrivenInvestor


DragGAN: Everything you need to know about this AI – DataScientest

Like other popular tools such as ChatGPT, Midjourney and Stable Diffusion, DragGAN exploits generative artificial intelligence technology to automate creative tasks.

In this case, it's photo editing that becomes child's play, since the AI almost seems to guess the user's intention and make the changes for them.

More traditional, long-established software such as Photoshop has no choice but to embrace innovation or risk becoming obsolete. Indeed, Adobe has already launched its own Firefly AI to bring its tools into the new era.

Over the next few years, advances in artificial intelligence will continue to open up new possibilities in image editing. These include automatic object recognition, real-time retouching and video editing.

Despite DragGAN's ease of use, exploiting its full potential requires a thorough understanding of artificial intelligence.

Human supervision is needed to improve the quality of the results produced by the AI, which can still make mistakes. To acquire this expertise, you can choose DataScientest.

Our training courses enable you to learn all the techniques and tools required to work in the Data Science profession, as an analyst, data scientist or data engineer.

In particular, you'll learn about machine learning and deep learning, neural networks, GANs, and specialized tools like Keras, TensorFlow and PyTorch. This will enable you to understand how software like DragGAN works, and even create your own models!

As you progress through the other modules of our training courses, you'll also become an expert in data analysis, business intelligence, dataviz, programming and databases.

By the end of the course, you'll have acquired all the skills you need to become a data science professional. You'll also receive a state-recognized diploma and certification from our cloud partners AWS or Azure.

All our training courses are delivered entirely online and are eligible for funding options. Don't waste another moment: discover DataScientest!

Read more here:

DragGAN: Everything you need to know about this AI - DataScientest


WiDS Livermore Conference: Attendees Share Research Insights – Mirage News

Lawrence Livermore National Laboratory (LLNL) recently hosted its 7th annual Women in Data Science (WiDS) conference for data scientists, industry professionals, recent graduates and others interested in the field. As an independent satellite of the global WiDS conference celebrating International Women's Day, the Livermore hybrid event was held to highlight the work and careers of LLNL and regional data-science professionals.

Hosted at the University of California Livermore Collaboration Center, the all-day event included technical talks, panel discussions, speed mentoring, a poster session and networking opportunities. Keynote speaker and LLNL Distinguished Member of Technical Staff Carol Woodward spoke about her unconventional career path. She described her experience as one of few women in male-dominated classes at Louisiana State University, where she earned her bachelor's degree in mathematics, and how she discovered the field of applied mathematics after nearly becoming a microbiologist. Throughout her talk, Woodward gave credit to those who mentored her at every step of her journey.

"[With] the power of the right cohort and engaged mentors, the right environment - it's amazing what you can accomplish," she said.

Read this article:

WiDS Livermore Conference: Attendees Share Research Insights - Mirage News


Vanderbilt to establish a college dedicated to computing, AI and data science – Vanderbilt University News

Vanderbilt has begun work to establish a transformative college dedicated to computer science, AI, data science and related fields, university leaders announced today. In addition to meeting the growing demand for degrees in technological fields and advancing research in rapidly evolving, computing-related disciplines, the new, interdisciplinary college will collaborate with all of Vanderbilt's schools and colleges to advance breakthrough discoveries and strengthen computing education through a "computing for all" approach.

The College of Connected Computing will be led by a new dean, who will report to Provost and Vice Chancellor for Academic Affairs C. Cybele Raver and to School of Engineering Dean Krishnendu "Krish" Roy. The search for the college's dean is scheduled to begin in late August, and recruiting of faculty will begin in the coming months. It will be the first new college at Vanderbilt since the university and the Blair School of Music merged in 1981.

"Of all the factors shaping society, few are more influential than the rapid emergence of advanced computing, AI and data science," Chancellor Daniel Diermeier said. "To continue to carry out our mission, prepare all our students for their careers and advance research across the university, Vanderbilt must contribute even more to the study, understanding and innovative application of these fast-changing disciplines. Our aim is to make Vanderbilt a global leader in these fields, ensuring our continued academic excellence and capacity for world-changing innovation."

"Our new college will enable us to build upon our strong programs and catapult Vanderbilt to the forefront of breakthrough discovery and innovation, in key areas of computer science and also in a wide range of other disciplines that capitalize on advanced computational methods. In launching this new college, we will provide students with the highest-caliber educational opportunities at the intersection of these pathbreaking fields," Raver said. "The creation of this college represents a tremendous win and will be transformative for our entire university community."

Raver noted the ways that Vanderbilt is forging a bold and distinct strategic path to address burgeoning research and educational opportunities, including increasing demand for expertise in computing-related fields. Moreover, she said, the global interest in AI aligns perfectly with Vanderbilt's leading work in that field. She said a dedicated college will enable Vanderbilt to keep making groundbreaking discoveries at the intersections of computing and other disciplines and will more effectively leverage advanced computing to address some of society's most pressing challenges.

"The establishment of this interdisciplinary, cross-cutting college is a watershed moment, not only for the School of Engineering, but also for the entire university," Roy said. "The future of education, research and thinking in all disciplines is now inherently tied to, and will be greatly influenced by, the knowledge and power of computing. The idea of computing for all is fundamental to the future of learning."

Many of the specific details about the college, including its departments, degree programs and research infrastructure, will be informed by the recommendations of a task force on connected computing composed of faculty from across the university. In addition, Vice Provost for Research and Innovation Padma Raghavan will launch a Computing Catalyst working group that will engage faculty and staff leaders in computing from across campus and solicit their input on strategically expanding the university's computing resources. "The decision to establish this new college is rooted in conversations with faculty," Raver said. "We are continuing that faculty engagement with this working group, and we're fortunate to have the advice of some of the best minds in these fields as we embark on this exciting journey."

The members of the Connected Computing Task Force include:

Krishnendu Roy (chair): Bruce and Bridgitt Evans Dean of Engineering; University Distinguished Professor of Biomedical Engineering; Pathology, Microbiology and Immunology; and Chemical and Biomolecular Engineering

Douglas Adams: Vice Dean of the Schools of Engineering; Daniel F. Flowers Chair; Distinguished Professor of Civil and Environmental Engineering; Professor of Mechanical Engineering; Faculty Affiliate, VINSE

Hiba Baroud: Associate Chair and Associate Professor of Civil and Environmental Engineering; James and Alice B. Clark Foundation Faculty Fellow; Associate Professor of Computer Science; Faculty Affiliate, VECTOR and the Data Science Institute

Gautam Biswas: Cornelius Vanderbilt Professor of Computer Science and Computer Engineering; Professor of Engineering Management; Senior Research Scientist, ISIS; Faculty Affiliate, Data Science Institute

Erin Calipari: Associate Professor of Pharmacology; Associate Professor of Molecular Physiology & Biophysics; Associate Professor of Psychiatry & Behavioral Sciences; Director, Vanderbilt Center for Addiction Research; Faculty Affiliate, Vanderbilt Brain Institute

Laurie Cutting: Patricia and Rodes Hart Professor and Professor of Special Education; Professor of Psychology; Professor of Pediatrics; Professor of Electrical and Computer Engineering; Professor of Radiology & Radiological Sciences; Associate Provost in the Office of the Vice Provost of Research and Innovation; Associate Director of the Vanderbilt Kennedy Center; Faculty Affiliate, Vanderbilt Brain Institute

Benoit Dawant: Cornelius Vanderbilt Professor of Electrical Engineering; Incoming Chair of the Department of Electrical and Computer Engineering; Director and Steering Committee Chair, Vanderbilt Institute for Surgery & Engineering; Professor of Biomedical Engineering; Professor of Computer Science

Abhishek Dubey: Associate Professor of Computer Science; Associate Professor of Electrical and Computer Engineering; Director, SCOPE Lab at ISIS; Faculty Affiliate, Institute for Software Integrated Systems and the Data Science Institute

Bennett Landman: Stevenson Professor of Electrical and Computer Engineering and Chair of the Department of Electrical and Computer Engineering; Professor of Biomedical Engineering; Professor of Computer Science; Professor of Neurology; Associate Professor of Biomedical Informatics; Associate Professor of Psychiatry and Behavioral Sciences; Associate Professor of Radiology and Radiological Sciences; Faculty Affiliate, Vanderbilt Institute for Surgery and Engineering (VISE), Vanderbilt Brain Institute, Vanderbilt Kennedy Center, Vanderbilt University Institute of Imaging Science (VUIIS), and the Data Science Institute

Michael Matheny: Professor of Biomedical Informatics; Professor of Biostatistics; Professor of Medicine; Director, Center for Improving the Public's Health Through Informatics

Sandeep Neema: Professor of Computer Science; Professor of Electrical and Computer Engineering; Chair of the Executive Council, Institute for Software Integrated Systems

Ipek Oguz: Assistant Professor of Computer Science; Assistant Professor of Biomedical Engineering; Assistant Professor of Electrical & Computer Engineering; Faculty Affiliate, Vanderbilt Institute for Surgery and Engineering (VISE)

J.B. Ruhl: David Daniels Allen Distinguished Chair of Law; Director, Program in Law and Innovation; Co-Director, Energy, Environment and Land Use Program; Faculty Affiliate, Data Science Institute

Jesse Spencer-Smith: Professor of the Practice of Computer Science; Adjunct Professor of Psychology; Interim Director and Chief Data Scientist, Data Science Institute

Jonathan Sprinkle: Professor of Computer Science; Professor of Electrical & Computer Engineering; Professor of Civil & Environmental Engineering; Faculty Affiliate, Institute for Software Integrated Systems

Yuankai "Kenny" Tao: Associate Professor of Biomedical Engineering; Associate Professor of Ophthalmology & Visual Sciences; SPIE Faculty Fellow in Engineering; Faculty Affiliate, Vanderbilt Institute for Surgery & Engineering

Holly Tucker: Mellon Foundation Chair in the Humanities; Professor of French; Director, Robert Penn Warren Center for the Humanities

Kalman Varga: Vice Chair of the Department of Physics & Astronomy; Professor of Physics; Director, Minor in Scientific Computing; Faculty Affiliate, VINSE

Steven Wernke: Chair of the Department of Anthropology; Associate Professor of Anthropology; Director, Vanderbilt Initiative for Interdisciplinary Geospatial Research; Faculty Affiliate, Data Science Institute

Jules White: Professor of Computer Science; Associate Professor of Biomedical Informatics; Senior Advisor to the Chancellor for Generative AI in Education and Enterprise Solutions; Faculty Affiliate, Institute for Software Integrated Systems and the Data Science Institute

Dan Work: Director of Graduate Studies in Civil Engineering; Professor of Civil & Environmental Engineering; Professor of Computer Science; Faculty Affiliate, VECTOR, Institute for Software Integrated Systems, and the Data Science Institute

Tracey George (ex officio): Vice Provost for Faculty Affairs and Professional Education; Charles B. Cox III and Lucy D. Cox Family Chair in Law and Liberty; Professor of Law

Tiffiny Tung (ex officio): Vice Provost for Undergraduate Education; Gertrude Conaway Vanderbilt Chair in the Social and Natural Sciences; Professor of Anthropology

Members of the Vanderbilt community can learn more about this initiative and share feedback with the faculty working group by visiting vanderbilt.edu/about/computingtaskforce.

The rest is here:

Vanderbilt to establish a college dedicated to computing, AI and data science - Vanderbilt University News


High School Coders Excel in Annual EECS Programming Contest – University of Arkansas Newswire

Austin Cook

From left to right: Deven Nguyen, Jai Gandhi and Nicholas Robinson, members of the winning Asian Sensations team from Rogers High School.

The annual High School Programming Contest, hosted by the Electrical Engineering and Computer Science Department, took place on March 9. This year's contest had more than 80 participants and 130 attendees from 10 schools.

The challenges of the contest were designed for a speed-based programming competition. Each problem required parsing input, processing data and producing output according to specific rules or conditions. Each submission was judged by members of the EECS faculty.

The High School Programming Contest offers an opportunity for high schoolers in the state to test their coding knowledge while fostering teamwork in a lively, competitive setting. Through this initiative, contestants engage in hands-on coding exercises, honing their skills while collaborating with fellow teammates.

Nicholas Robinson, a student from Rogers High School and a member of the first-place team, the Asian Sensations, said, "I thought the event was fun; the problems were fun. There were some interesting problems. The contest was a lot about speed, so we got lucky with how the time penalties worked out. But overall, I was happy with the competition. I thought we did well. I enjoyed the problems."

Jeff Anderson, the coach of the Asian Sensations and a teacher at Rogers High School, said, "I'm not surprised they won. But I'm very proud of them. They worked hard this past year. They competed in the state competition and this competition last year. They've been working hard all year to make this happen," Anderson added. "They're all amazing. They're all amazing kids, and they have bright futures ahead of them."

Robinson said, "It felt good to win. We really didn't do as well as we wanted to last year. And we had a good run this year. So, I was happy with our performance. It was better than we expected this year." Robinson added, "I would just encourage anybody to come do these competitions. I think it's really fun. Whether it's in-person ones like this or online ones, it's always just fun to solve problems and hang out with people who are awesome and enjoy coding."

At the end of the competition, four trophies were presented, with Dean Kim Needy presenting the first-place trophy, and various prizes were handed out to participants.

First Place Team: The Asian Sensations. Members: Deven Nguyen, Nicholas Robinson, Jai Gandhi. High School: Rogers High School.

Second Place Team: Three Fire Emojis. Members: Ellie Feng, Thomas Coolidge, Hudson Ledbetter. High School: Conway High School.

Third Place Team: The Rubber Duckies. Members: Xave Kapity, Ivan Freeman. High School: Haas Hall Academy Rogers.

Most Creative Team: 3 Tiny Whales. Members: Christopher Ramirez-Lazaro, Patrick Jiang, Maddox Sutton. High School: Fayetteville High School.

Originally posted here:

High School Coders Excel in Annual EECS Programming Contest - University of Arkansas Newswire


MIT scientists have just worked out how to make the most popular AI image generators 30 times faster – Livescience.com

Popular artificial intelligence (AI) powered image generators can run up to 30 times faster thanks to a technique that condenses an entire 100-stage process into one step, new research shows.

Scientists have devised a technique called "distribution matching distillation" (DMD) that teaches new AI models to mimic established image generators, known as diffusion models, such as DALL-E 3, Midjourney and Stable Diffusion.

This framework results in smaller and leaner AI models that can generate images much more quickly while retaining the same quality of the final image. The scientists detailed their findings in a study uploaded Dec. 5, 2023, to the preprint server arXiv.

"Our work is a novel method that accelerates current diffusion models such as Stable Diffusion and DALL-E 3 by 30 times," study co-lead author Tianwei Yin, a doctoral student in electrical engineering and computer science at MIT, said in a statement. "This advancement not only significantly reduces computational time but also retains, if not surpasses, the quality of the generated visual content."

Diffusion models generate images via a multi-stage process. Using images with descriptive text captions and other metadata as the training data, the AI is trained to better understand the context and meaning behind the images so it can respond to text prompts accurately.

Related: New AI image generator is 8 times faster than OpenAI's best tool and can run on cheap computers

In practice, these models work by taking a random image and encoding it with a field of random noise so that it is destroyed, AI scientist Jay Alammar explained in a blog post. This is called "forward diffusion" and is a key step in the training process. Next, the image undergoes up to 100 steps of noise removal, known as "reverse diffusion," to produce a clear image based on the text prompt.
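Forward diffusion can be sketched in a few lines: blend the image with Gaussian noise according to how far along the schedule you are. This toy version (a linear schedule over a flat list of pixel values, not the actual noise schedule these models use) is only meant to show the idea:

```python
import random


def forward_diffusion(pixels, t, num_steps=100):
    """Blend an image toward pure Gaussian noise: at t=0 the image is
    untouched; at t=num_steps it is fully replaced by noise."""
    alpha = 1.0 - t / num_steps  # fraction of the original signal kept
    return [alpha * p + (1 - alpha) * random.gauss(0.0, 1.0) for p in pixels]
```

Reverse diffusion is the learned inverse of this process: the network is trained to predict and strip away the noise, step by step, which is exactly the iteration count that DMD collapses to one.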


By applying their new framework to a new model and cutting these "reverse diffusion" steps down to one, the scientists cut the average time it took to generate an image. In one test, their model slashed the image-generation time from approximately 2,590 milliseconds (2.59 seconds) using Stable Diffusion v1.5 to 90 milliseconds, 28.8 times faster.

DMD has two components that work together to reduce the number of iterations required of the model before it spits out a usable image. The first, called "regression loss," organizes images based on similarity during training, which makes the AI learn faster. The second is called "distribution matching loss," which means the odds of depicting, say, an apple with a bite taken out of it corresponds with how often you're likely to encounter one in the real world. Together these techniques minimize how outlandish the images generated by the new AI model will look.

"Decreasing the number of iterations has been the Holy Grail in diffusion models since their inception," co-lead author Fredo Durand, professor of electrical engineering and computer science at MIT, said in the statement. "We are very excited to finally enable single-step image generation, which will dramatically reduce compute costs and accelerate the process."

The new approach dramatically reduces the computational power required to generate images because only one step is required as opposed to "the hundred steps of iterative refinement" in original diffusion models, Yin said. The model can also offer advantages in industries where lightning-fast and efficient generation is crucial, the scientists said, leading to much quicker content creation.

See original here:

MIT scientists have just worked out how to make the most popular AI image generators 30 times faster - Livescience.com


SLU, TGI Researcher Part of Team Using Remote Sensing to Study Permafrost : SLU – Saint Louis University

ST. LOUIS – Saint Louis University is one of five universities working together to study permafrost using hyperspectral remote sensing, under a grant from the Department of Defense (DoD) Multidisciplinary University Research Initiative (MURI) program.

Vasit Sagan, Ph.D., is a professor of geospatial science and computer science, associate vice president for geospatial science at Saint Louis University and chief scientist for food security and digital agriculture for the Taylor Geospatial Institute (TGI). Photo by Sarah Conroy.

Vasit Sagan, Ph.D., professor of geospatial science and computer science, associate vice president for geospatial science at Saint Louis University and chief scientist for food security and digital agriculture for the Taylor Geospatial Institute (TGI), is SLU's principal investigator on the project.

The project, Interdisciplinary Material Science for the Hyperspectral Remote Sensing of Permafrost (IM SHARP), will explore the physical and chemical properties of permafrost using remote sensing. The permafrost properties will be reviewed under current and potential environmental conditions.

The DoD awarded the highly competitive five-year, $7.5 million MURI grants to 30 teams across 73 academic institutions earlier this month, after the Army Research Office, Air Force Office of Scientific Research, and Office of Naval Research solicited proposals in areas of strategic importance to the department.

The multidisciplinary IM SHARP research team is led by Tugce Baser, Ph.D., assistant professor of geotechnical engineering at the University of Illinois and a TGI associate. The team also includes Go Iwahana of the International Arctic Research Center at the University of Alaska Fairbanks; Michael Lanagan, The Pennsylvania State University; Joel Johnson, Ohio State University; and Sahin Ozdemir, The Pennsylvania State University.

The team will explore the fundamental physical, chemical, electromagnetic, thermodynamic, hydraulic and mechanical properties of permafrost under current and changing environmental conditions that govern the remote sensing of permafrost at various wavelengths.

The project seeks to understand hyperspectral fingerprints of permafrost material chemistry and its dynamics in the context of climate change. To do this, the team will use simulations, remote sensing from multiple scales (drones, crewed aircraft, and satellite imaging), light polarization, and electromagnetic (EM) theory guided by knowledge of permafrost physical processes.

SLU will receive $1.3 million to study hyperspectral signatures and light polarization associated with the physical, chemical, electromagnetic and thermodynamic properties of permafrost under current and future climate conditions.

Specifically, Sagan will lead hyperspectral data collection at permafrost test sites; scan simulated permafrost samples created in the lab under various "what if" scenarios with benchtop scanning systems; and develop novel spectral algorithms for characterizing permafrost across multiple scales, wavelengths and polarizations.

Since launching in 1985, the DoD's MURI program has allowed teams of investigators from multiple disciplines to generate collective insights, facilitating the growth of cutting-edge technologies that address unique challenges for the Department of Defense.

"Permafrost plays a pivotal role in regulating Earth's climate and offers a living laboratory to accurately characterize the rate and magnitude of a warming climate," Sagan said. "This is truly an interdisciplinary science team representing expertise in remote sensing, material chemistry, theoretical modeling, physics, and geotechnical engineering, uniquely positioned to lead this project."

Founded in 1818, Saint Louis University is one of the nation's oldest and most prestigious Catholic institutions. Rooted in Jesuit values and its pioneering history as the first university west of the Mississippi River, SLU offers more than 15,200 students a rigorous, transformative education of the whole person. At the core of the University's diverse community of scholars is SLU's service-focused mission, which challenges and prepares students to make the world a better, more just place. For more information, visit slu.edu.

TGI is passionate about fueling geospatial science and technology to create the next generation of solutions and policies that the whole world will depend on for sustainability and growth.

The TGI consortium is led by Saint Louis University and includes the Donald Danforth Plant Science Center, Harris-Stowe State University, University of Illinois Urbana-Champaign, Missouri University of Science & Technology, University of Missouri-Columbia, University of Missouri-St. Louis, and Washington University in St. Louis. Collectively, these institutions encompass more than 5,000 faculty and 100,000 students.

For more information, visit taylorgeospatial.org.

SLU, TGI Researcher Part of Team Using Remote Sensing to Study Permafrost : SLU - Saint Louis University

Is a Computer Science Degree Worth It? – Southern New Hampshire University

If working with software, technology and a systems mindset interests you, computer science can be a great fit. It's a field that offers many opportunities to work in cutting-edge technology and can lead to a variety of rewarding career paths.

"Computer science is a diverse field grounded in technology, combining elements of project planning, software development, data analysis and more," said Dr. Gary Savard, an associate dean of computer science at Southern New Hampshire University (SNHU).

In addition to work at SNHU, Savard has extensive experience working in the computer science field in other ways. He served in the United States Air Force as an officer, both active and reserve, for more than 30 years. He also worked as a software engineer for many companies and owned a software company himself.

His experience in the field ranges from classified Department of Defense projects to maintenance workflow software, artificial intelligence, medical imaging, large-scale database systems, web development and many other types of software development.

At SNHU, Savard oversees the team responsible for computer science course development and management, among many other responsibilities with both faculty and students.

"Computer science is highly in demand across all types of industry," Savard said.* In fact, the field is "enjoying exponential growth, both with traditional companies and with cutting-edge start-ups," he said.*

Earning a degree in computer science demonstrates your ability to work in a team as well as your aptitude for learning new technological skills and programming languages. You will also gain a lot of experience with hands-on learning and collaboration, according to Nick LeBoeuf '23.

Since earning his bachelor's degree in computer science from SNHU, LeBoeuf has put his own technological skills to work at his job in web development. To be successful in this role, he needs strong design skills coupled with the ability to put himself in the end-user's shoes.

LeBoeuf enjoys working in a profession that challenges him to keep his skills sharp. "What I love most about the computer science field is that you are constantly learning," he said. "Technology is ever-evolving, and in computer science, we are (on) the front lines of this ever-changing field, trying to ... adapt our existing applications to new standards."

Any degree can be hard if it's the wrong fit. "While computer science is no doubt a challenging major for many due to its highly technical and mathematical nature, it's a field that can be very rewarding for the right person," said Savard.

"It takes some time to develop the skills required (to be successful), but grit and persistence pay off," he said.

As a recent graduate, LeBoeuf said, "I do think computer science (may) require more effort than other degrees ... but if you put in that effort and really enjoy what you do, it doesn't seem hard."

Several skills that can be helpful for success in the computer science field, per Savard, are:

There are likely some individual classes you might not want to take, just like with any degree program, but these classes may help you later on in your schooling and career.

For LeBoeuf, "Data Structures and Algorithms" was a challenge. "When I was taking the class my sophomore year, it was definitely not my favorite class ... but I stuck with it because I knew it was important," he said.

Two years later, LeBoeuf was able to apply what he learned in this class by serving as a Lead Peer Educator at SNHU for the computer science program. Through this role, he was able to teach other computer science majors the material and help them along in their own schooling. Today, working in the field as a front-end developer, LeBoeuf continues to apply the concepts he learned in that class every day.

The U.S. Bureau of Labor Statistics (BLS) shows positive job outlooks for a number of professions suitable for people with a bachelor's degree in computer science.* These professions include:

Median incomes for these jobs range from $80,730 for web developers and digital designers to $126,900 for computer network architects, BLS reported.* Job growth over the next 10 years is predicted to range from 4% (the national average for job growth) for computer network architects to as much as 32% for information security analysts, according to BLS.*

According to BLS, you may engage in the following types of work, depending on your specific career choice:

While many computer science jobs require only a bachelor's degree to get started, if you go on to earn a master's degree, you may have even more career opportunities (SNHU does not currently offer a master's degree in computer science).

Working as a computer and information research scientist in software, research and development, and computer systems design tends to be among the higher-earning computer science careers, as reported by BLS.* There are also many opportunities to work in the federal government, including the military, as well as academia. While these latter roles may not be as lucrative as more technological jobs, they still pay between $84,440 and $115,400, according to BLS.*

Understandably, it may sound as though artificial intelligence, commonly known as AI, could take over the industry and result in computer scientists losing their jobs.

It's important to remember that AI was originally developed by computer scientists. Because of this, Savard said he feels confident that computer science as a discipline isn't going anywhere. Instead, "AI will help us to progress more quickly in developing new technologies as well as automate some of the more tedious tasks that can consume part of our day," he said.

LeBoeuf agrees that AI is a good thing. "People think that AI is going to take jobs," he said. "(But) you still need that human aspect to every single job to make sure AI is producing what it's meant to (produce)."

After all, computer scientists are the ones who implement AI into websites and applications for people to use, LeBoeuf said.

"AI can revolutionize various industries by improving efficiency and decision-making," he said. "Through the tons of data you give it, (AI) also might discover new patterns or insights that humans might overlook because of the amount of data (they have before them)."

Everyone has their own motivation for choosing a career field. If you have an interest in one or more of the following areas, you may find computer science a good path for you, said Savard:

Savard recognizes the unique skill set of computer scientists. He said that "the ability to do things that seem like magic to those not in the field" is very rewarding. Working first in the military and now in academia, he enjoys being able to put his skills to use educating others.

LeBoeuf's work is with a civil engineering firm. He enjoys the public involvement aspect of the field in particular.

"Putting yourself in the user's shoes, and understanding where they would look for certain items on a website," is important and useful, LeBoeuf said.

The quickly expanding nature of the computer science field and the many avenues for learning and applying your skills are top benefits to a career in computer science.

Taking advantage of opportunities for collaboration and learning while in school can help prepare you for the rewarding computer science career of your choice.

*Cited job growth projections may not reflect local and/or short-term economic or job conditions and do not guarantee actual job growth. Actual salaries and/or earning potential may be the result of a combination of factors including, but not limited to: years of experience, industry of employment, geographic location, and worker skill.

A former higher education administrator, Dr. Marie Morganelli is a career educator and writer. She has taught and tutored composition, literature, and writing at all levels from middle school through graduate school. With two graduate degrees in English language and literature, her focus, whether teaching or writing, is in helping to raise the voices of others through the power of storytelling. Connect with her on LinkedIn.

Is a Computer Science Degree Worth It? - Southern New Hampshire University
