
After Gaining 80% in 2023, Is Amazon Still a Buy? – The Motley Fool

After a stock has skyrocketed, it's fair to take a step back and ask whether that particular player still makes a good buy. You may be thinking that this kind of momentum can't last forever and the stock is looking pretty expensive now. Often, you could be right.

But some stocks, even after enormous gains, have what it takes to keep the good times rolling. If you avoid them because they increased a lot in value, you may miss out on a very valuable investment opportunity.

All of this means it's important to look at every stock market winner on a case-by-case basis before deciding whether to buy, sell, hold, or avoid. With that in mind, let's take a look at e-commerce and cloud computing giant Amazon (AMZN -0.39%), a stock that delivered an 80% gain last year. Is this consumer goods and technology giant still a buy?


Amazon stock was on fire last year, following a difficult 2022. The company reported its first annual loss in almost a decade after facing a variety of challenges -- from the pressure of higher interest rates to overcapacity across its fulfillment network. The tough times prompted Amazon to improve its cost structure, something that quickly set the company on the road to recovery.

Amazon cut tens of thousands of jobs, improved efficiency across the company, and shifted investment to high-growth areas -- such as technology infrastructure to support its cloud computing business, Amazon Web Services (AWS). The company also switched its U.S. fulfillment model to a regional one from a national one, making delivery cheaper and faster.

As a result, Amazon returned to quarterly profitability last year and progressively improved other financial metrics. By the third quarter, sales had climbed by double digits to $143 billion, net income had more than tripled, and free cash flow had swung to an inflow of more than $21 billion from an outflow a year earlier.

Importantly, the changes Amazon made to its cost structure aren't just moves to help the company during difficult times. Instead, these steps should help Amazon gain customers, keep costs in check, and grow its business well into the future -- and all of this could equal earnings and share-price growth during better economic times, too.

For example, by making key inventory items available in eight regions around the country, Amazon is able to lower its cost to serve. By ensuring speedy delivery, the company is increasing its chances customers will keep coming back. That scores two wins for Amazon and should help the company maintain its dominance in the high-growth e-commerce market.

Another example is Amazon's investments in artificial intelligence (AI). The company uses AI for its own purposes, such as optimizing warehouse operations. It also offers AWS customers platforms so they can launch their own generative AI projects -- without having to do any heavy lifting.

This is thanks to services like Amazon Bedrock, a platform that gives companies access to top foundation models they can customize to suit their needs. AWS already is the world's No. 1 cloud computing services provider, and its prioritization of AI should help it keep that spot.
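For developers curious what "without heavy lifting" looks like in practice, here is a minimal sketch of calling a foundation model through Bedrock using boto3's bedrock-runtime client. The region, model ID and prompt format are assumptions for illustration; each model provider on Bedrock expects its own request shape, so check the Bedrock documentation before relying on this.

```python
# Minimal sketch: invoke a foundation model via Amazon Bedrock (boto3).
# Model ID, region and prompt format are placeholders, not recommendations.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
resp = client.invoke_model(
    modelId="anthropic.claude-v2",  # placeholder model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"prompt": "\n\nHuman: Summarize zero trust.\n\nAssistant:",
                     "max_tokens_to_sample": 200}),
)
print(json.loads(resp["body"].read())["completion"])
```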

All of this points to Amazon having what it takes to gain over time. And analyst forecasts for more than 79% growth per year over the next five years reinforce that.

But could the stock possibly climb in the near term, too? Amazon has only just started reaping the rewards of its cost-structure improvements and other efforts.

In the most recent earnings report, CEO Andy Jassy said, "We have a long way before being out of ideas to improve cost and speed" of delivery. And the company, from e-commerce to cloud computing, also should benefit as economic pressures lift.

Right now, Amazon trades at 43 times forward earnings estimates, about half its valuation by this measure a couple of years ago. That looks reasonably priced, and it's why this top technology stock is still a great buy right now, even after soaring last year.
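For readers newer to valuation metrics, a forward multiple is simply the share price divided by estimated earnings per share for the next 12 months. A quick sketch with placeholder numbers (not Amazon's actual figures):

```python
# Forward P/E = current share price / consensus EPS estimate for the next year.
# Both inputs are hypothetical, chosen only to illustrate a ~43x multiple.
share_price = 155.00          # placeholder price per share
forward_eps_estimate = 3.60   # placeholder consensus EPS estimate

forward_pe = share_price / forward_eps_estimate
print(f"{forward_pe:.1f}x")   # ~43.1x
```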

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Adria Cimino has positions in Amazon. The Motley Fool has positions in and recommends Amazon. The Motley Fool has a disclosure policy.

Read this article:
After Gaining 80% in 2023, Is Amazon Still a Buy? - The Motley Fool

Read More..

How to run Windows 365 on an iPad – TechTarget

Windows 365 presents a unique opportunity for organizations to deliver a Windows environment to a variety of devices.

With Windows 365, end users can access a virtual desktop -- or Cloud PC, as Microsoft calls it -- from any device with an internet connection. One of the most important benefits of Windows 365 is how quickly organizations can get up and running, with most of the infrastructure running in the cloud and serving users across all device types.

Windows running on an Apple device may seem like an unlikely pairing, but more businesses are incorporating iPads into their workflows. As such, it may make sense to have access to essential business software on these devices, especially software that may not have a mobile or iPadOS counterpart or requires intranet resources and a Windows environment.

One of the primary advantages of Windows 365 is the versatility it provides to employees who need to work remotely or while on the move. With an iPad, they can access their Windows environment, apps and files from virtually anywhere, making it more convenient for them to remain connected and productive.

Additionally, there are some scenarios where a Windows Cloud PC on an iPad could provide value to businesses. For example, if an organization provides iPads to its employees for remote work in addition to their company Windows laptops, access to Windows applications enables a consistent and seamless experience while remote and on the go. Organizations that use iPads for client-facing presentations or demonstrations could also benefit from access to a full Windows OS. This technology could provide a broader range of capabilities and flexibility while maintaining high trust and data security.


Another advantage is cost savings. Instead of purchasing and maintaining separate devices for each employee, organizations can provide iPads with access to Windows 365, reducing hardware costs and streamlining device management. An excellent use case is employees using their own devices: instead of purchasing, provisioning, managing and supporting multiple devices per employee, organizations can allocate a Cloud PC that an employee can use on a personal or company-managed iPad.

As experienced IT administrators may expect, there are some limitations that come with running Windows 365 on an iPad.

One of the biggest concerns is the cost-to-performance ratio. While configuring and deploying Windows 365 Cloud PC, organizations can choose the amount of CPU, RAM and storage according to their needs.

However, the higher the required performance, the higher the monthly cost. For basic applications, such as Word and other productivity tools, a modest configuration may suffice for most organizations. But for those who need a Cloud PC to handle more complex tasks, the monthly costs can accumulate quickly and become untenable.
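To make the tradeoff concrete, here is a small sketch for comparing configurations against a budget. The tier names, sizes and monthly prices below are placeholders, not Microsoft's published pricing; check the current Windows 365 plan pages for real numbers.

```python
# Hypothetical Cloud PC tiers -- sizes and prices are illustrative placeholders.
PLANS = {
    "light":  {"vcpu": 2, "ram_gb": 4,  "storage_gb": 64,  "usd_per_month": 31},
    "medium": {"vcpu": 2, "ram_gb": 8,  "storage_gb": 128, "usd_per_month": 41},
    "heavy":  {"vcpu": 4, "ram_gb": 16, "storage_gb": 128, "usd_per_month": 66},
}

def annual_cost(plan: str, seats: int) -> int:
    """Yearly cost of assigning one Cloud PC of the given tier to each seat."""
    return PLANS[plan]["usd_per_month"] * 12 * seats

print(annual_cost("light", 50))   # 18600 -- basic productivity workloads
print(annual_cost("heavy", 50))   # 39600 -- heavier workloads cost roughly 2x
```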

Cloud PCs come with the drawback of always requiring an internet connection. If a user needs to work in areas with inconsistent Wi-Fi connectivity, this is a bad fit. Organizations could consider buying an iPad with cellular capabilities and adding a cellular carrier plan to keep the device connected. This, however, can add to the organization's budget and overall technology cost.

Before setting up Windows 365 on iPad, there are a few things organizations need to ensure are in place, including the following.

For optimal performance using Windows 365, a stable and reliable internet connection is required. Some organizations may consider using an iPad with cellular data for the best experience.

Windows 365 is available in four plans with per-user licenses: Business, Enterprise, Frontline and Government. While Microsoft provides a comprehensive guide to its own licensing, a brief summary of the different licenses can provide enough info to get customers started:

To begin the setup process, admins can visit Microsoft's Windows 365 plans page to choose which plan and configurations are best for their organizations.

From there, admins can remotely manage Windows 365 Cloud PCs through the Microsoft 365 admin center or, for the Business plan, the Windows 365 web portal. For the Enterprise and above plans, they can use Intune, with integrations extending to Entra ID and Microsoft Defender. Each option supports various remote management actions, configurations, security settings and policy assignments. For Windows 365 Enterprise, organizations can also upload and use custom images for their Cloud PCs.
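Beyond the portals, Cloud PCs can also be scripted against Microsoft Graph. Below is a minimal sketch that lists Cloud PCs, assuming an Entra ID app registration that already holds a token with the CloudPC.Read.All permission; the endpoint and field names follow the Graph documentation at the time of writing, so verify them against current docs before relying on this.

```python
# List Cloud PCs via Microsoft Graph's virtual endpoint API.
# Assumes a valid bearer token with CloudPC.Read.All; acquiring one
# (e.g., via MSAL) is out of scope for this sketch.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-from-entra-id>"  # placeholder

resp = requests.get(
    f"{GRAPH}/deviceManagement/virtualEndpoint/cloudPCs",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for pc in resp.json().get("value", []):
    print(pc.get("displayName"), pc.get("userPrincipalName"))
```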

Users can view and access their available Cloud PCs directly in the Safari browser by signing in at windows365.microsoft.com with their company credentials. They can then choose to run the Cloud PC directly within the browser.

Users can download the Remote Desktop Mobile application from the Apple App Store. Additionally, for devices managed through mobile device management, such as Microsoft Intune, administrators can distribute the Remote Desktop Mobile application as a managed app to streamline application installation and availability.

Once the application is installed, users can follow this process:

Organizations that don't want to deploy Windows 365 have another option for bringing Windows applications and resources to iPads. At Microsoft Ignite 2023, Microsoft announced the launch of Windows App -- a portal from which users can access numerous Microsoft applications, services and even instances of Windows OSes. The app, which is in public preview at the time of publication, offers a customizable home screen that enables users to access a Windows 365 Cloud PC, Azure Virtual Desktop or Microsoft Dev Box from anywhere. The app also supports multiple monitors and display resolutions, as well as device redirection for peripherals such as webcams, audio devices, storage devices and printers.

Michael Goad is a freelance writer and solutions architect with experience handling mobility in an enterprise setting.

See original here:
How to run Windows 365 on an iPad - TechTarget

Read More..

Shifting Paradigms: Cloud Security in the Post-Pandemic Era – Check Point Blog

The new normal. It began as a buzzword after COVID-19 hit the world in 2020. The new normal referred to a culture that adapted to the pandemic and changed our day-to-day lives. One significant outcome of the lockdown was digital transformation through hasty cloud adoption.

Although cloud computing was already making steady inroads into the mainstream business world, the pandemic hit the accelerator hard, making a cloud-first strategy necessary. A recent survey projected that the cloud computing market will grow to $832.1 billion by 2025, from $371.4 billion in 2020. Cloud technology has benefited companies in multiple ways, including improved performance, better productivity, lower operational costs, and increased agility.
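Those two figures imply annual growth of roughly 17.5%, assuming the standard compound annual growth rate formula over the five years from 2020 to 2025:

```python
# Implied compound annual growth rate from the survey's 2020 and 2025 figures.
cagr = (832.1 / 371.4) ** (1 / 5) - 1
print(f"{cagr:.1%}")  # ~17.5% per year
```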

However, the downside to rapid cloud adoption is vulnerability to security threats. During the pandemic, more than 80% of global organizations reported that cyber threats had increased, and 79% experienced operational disruption due to a cyberattack. The vulnerability of cloud applications stems primarily from weak security practices and the unprecedented speed of migration to cloud environments.

Companies opened up to remote working to ensure business continuity during the lockdown. Given the speed needed to shift to this new work model, employees started using unmanaged devices to access corporate environments away from security defenses. This created blind spots that security teams could not sanitize or secure; employees using their own devices on unsecured internet networks expanded the attack surface and weakened the corporate security posture. The result was a wave of cyberattacks, costing companies sensitive data, money, and reputation.

Businesses also had to optimize their cloud costs while modernizing IT infrastructure. According to the Flexera 2022 State of the Cloud Report, companies overshoot their cloud budgets by 13% on average. But slashing budgets often leads to poor planning and security gaps, with organizations trading security for productivity -- especially considering that 80% do not have a dedicated cloud security team.

We need to learn from historic security incidents like the Equifax data breach, in which hackers exploited an unpatched vulnerability to access Equifax's systems and steal the Social Security numbers of roughly 145 million US citizens. A simple security best practice -- timely patching -- could've saved the company millions of dollars in fines, damning litigation, and a public battering that's hard to recover from.

Zero Trust Approach

Given the security ambiguity of unmanaged devices in a remote work setup, the zero trust approach is an ideal strategy for improving your security stance. Zero trust means treating every attempt to access the corporate network as a potential threat, and therefore enforcing security-centric operations within the network. For example, you can establish rules to authenticate devices and users across the network; even if hackers obtain valid credentials, multiple checkpoints will alert the security team. You can also implement least privilege access to restrict users from accessing applications they don't need.
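As a toy illustration of those two building blocks -- authenticate every request, then enforce least-privilege scopes per application -- consider the sketch below. The in-memory token table stands in for a real identity provider; nothing here is a specific product's API.

```python
# Toy zero-trust check: every request is re-authenticated (no implicit trust
# from network location), then authorized against per-route minimum scopes.
ISSUED_TOKENS = {"tok-alice": {"user": "alice", "scopes": {"wiki.read"}}}  # stand-in IdP
ROUTE_SCOPES = {"/wiki": {"wiki.read"}, "/payroll": {"payroll.read"}}      # least privilege

def authorize(token: str, path: str) -> bool:
    claims = ISSUED_TOKENS.get(token)   # authenticate on every request
    if claims is None:
        return False                    # unknown token: deny by default
    return ROUTE_SCOPES.get(path, set()).issubset(claims["scopes"])

print(authorize("tok-alice", "/wiki"))     # True: scope matches
print(authorize("tok-alice", "/payroll"))  # False: least privilege denies it
```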

Secure Access Service Edge (SASE)

To secure multi-cloud environments efficiently, consider adopting the Secure Access Service Edge (SASE) model. Gartner predicts that over 40% of organizations will have a SASE strategy by 2024, up from less than 1% in 2018. SASE is an advanced network architecture that combines VPN, SD-WAN, and Zero Trust Network Access.

Looking ahead, a few trends are set to shape cloud security:

1. Data breaches: The shift to the cloud was considered a more efficient and secure way of managing infrastructure and information. However, the last few years have proved the opposite. With the inevitable cloud-first future, companies will concentrate their efforts and investments on securing their stakes in the cloud.

2. Stricter regulations: What began as litigation over cyberattacks could morph into more stringent policies and regulations to shield ordinary people from these threats. Quite a few mandates to secure confidential data are already in place. Governments are also expected to bar companies from collecting certain user details, improving user privacy and reducing exposure.

3. Smart contracts: Companies could consider using blockchain-based contracts to define SLAs and business arrangements to automate actions in case of exceptions. Integrating the cloud with blockchain technology can open up several new avenues concerning security and overall operations.

4. Quantum computing: Companies like Microsoft, Google, IBM, and Intel are working to make quantum computing a reality in collaboration with governments. These systems will push cybersecurity into a new realm, making quantum-resistant cryptography essential. Before large-scale quantum computers arrive, we can expect quantum-safe algorithms to be deployed to protect data from quantum attacks.

A paradigm shift in cybersecurity is the need of the hour, and it has been for some time. Companies need to reset their security networks, bringing remote and mobile devices within the purview of their defenses. Instead of acting as firefighters, every company should be able to predict possibilities and threats, and maneuver its security strategy to match the current landscape.

From code to cloud, Check Point CloudGuard's CNAPP unifies cloud security, merging deeper security insights to prioritize risks and prevent critical attacks -- providing more context, actionable security and smarter prevention. CloudGuard enhances visibility by enriching context, provides actionable remediation insights and speeds up threat mitigation across diverse cloud teams.

If you would like to see CloudGuard in action, please schedule a demo, and a cloud security expert will help you assess your needs.

If you have any other questions, please contact your local Check Point account representative or channel partner using the contact us link.

See the rest here:
Shifting Paradigms: Cloud Security in the Post-Pandemic Era - Check Point Blog - Check Point Blog

Read More..

Computer science major applied to 456 internships, got three offers – Business Insider


Computer science major Oliver Wu says he pulled out all the stops in his quest for a summer internship.

"456 applications, 56 interviews, and 0 sleep in 4 months, all for 1 internship," the University of Michigan junior wrote in a TikTok post published on January 11.


Wu, whose post has been viewed more than 2.9 million times, told Newsweek that he started applying even before classes resumed in the fall.

"I started applying in July, and soon I hit 200 applications," Wu said, adding that he was making 15 to 20 applications a day before the school term began.

Wu told Newsweek that he did feel "burned out" from the long-drawn search, which saw him receive "hundreds of rejections."

"The hardest part was staying positive and working hard," Wu said.

But Wu remained unfazed and pressed on with his search.

"I did not want to feel regret that I could have tried harder, so I made up my mind to pursue this with everything I had," Wu told the outlet.

Wu's efforts eventually paid off. He secured three offers and will join Ford as an enterprise technology intern this summer.

"I was in class at the time, and I remember stepping out, going into the hallway, and jumping up and down while silently screaming in excitement for around 10 minutes," Wu recounted to Newsweek.

Wu isn't the only one who has had to struggle with a slowing job market for tech.

The industry-wide layoffs, which began in late 2022, don't seem to have abated. Tech companies have continued to ax staff to streamline their operations.

Last week, Google CEO Sundar Pichai told staff to brace for more job cuts this year. Pichai said the layoffs were about "removing layers to simplify execution and drive velocity in some areas."

Wu did not immediately respond to a request for comment from Business Insider sent outside regular business hours.

Read this article:

Computer science major applied to 456 internships, got three offers - Business Insider

Read More..

OSU-Cascades professor receives $628,000 ODE award to boost K-12 students’ participation in computer science – KTVZ

Only 4% of the state's high schoolers take a computer science class, and just 2% of those students are female

BEND, Ore. (KTVZ) -- An Oregon State University-Cascades professor has received a $628,000 award from the Oregon Department of Education to expand universally engaging, culturally appropriate and inclusive computer science education for K-12 students.

The award is part of a statewide plan that aims to ensure computer science education is available to public school students on an equitable basis and to broaden participation for all students by 2027-28. It has been funded in part through a $9.8 million federal grant awarded to the Oregon Department of Education. In addition to OSU-Cascades, the plan was developed in partnership with the University of Oregon and Portland State University.

Roughly 41% of high schools in Oregon offer students a foundational computer science course, according to the Oregon Department of Education, but just 4% of the state's high schoolers are taking one of those classes, and only 2% of those students are female.

"An understanding of computing is necessary, foundational knowledge in a world where technology is part of our everyday lives, no matter our career choices," said the award administrator, Jill Hubbard, an associate professor of practice in computer science who also coordinates the degree program at OSU-Cascades.

"This award will enhance Oregon K-12 teachers' ability to familiarize every student, including underserved students, in their classrooms with computing," she said. "As a byproduct, society will benefit from more diversity in the technology workforce and in approaches to problem-solving using computing."

The award will build support structures that create systematic changes focused on equity and inclusion in computer science education.

According to Hubbard, many computer science teachers are tasked with creating curriculum with little professional development support. Often, courses are tied to specific educators, making it difficult to sustain courses from year to year, especially in rural areas.

The award will fund professional development in Exploring Computer Science, a national, equity-driven and research-based introductory high school curriculum shown to increase participation by underserved students. Hubbard said the award will also provide stipends for teachers to attend summer workshops and professional development throughout their first year of teaching the curriculum.

Additionally, the award will establish regional specialists to support local professional development communities of computer science teachers. It will also fund the development of resources and workshops to support K-12 school counselors and administrators, who help teachers ensure all students have access to computer science education, said Hubbard.

"We're working to develop scalable, sustainable, and systemic solutions that intentionally increase participation in computer science by underserved students," Hubbard said.

Continue reading here:

OSU-Cascades professor receives $628,000 ODE award to boost K-12 students' participation in computer science - KTVZ

Read More..

AI learns to simulate how trees grow and shape in response to their environments – Tech Xplore


A research team from Purdue University's Department of Computer Science and Institute for Digital Forestry, with collaborator Sören Pirk at Kiel University in Germany, has discovered that artificial intelligence can simulate tree growth and shape.

The DNA molecule encodes both tree shape and environmental response in one tiny, subcellular package. In work inspired by DNA, Bedrich Benes, professor of computer science, and his associates developed novel AI models that compress the information required for encoding tree form into a megabyte-sized neural model.

After training, the AI models encode the local development of trees that can be used to generate complex tree models of several gigabytes of detailed geometry as an output.
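The companion papers build on L-systems (one is titled "Latent L-systems: Transformer-based Tree Generator"), and a classic L-system already shows the core compression idea: a few rewrite rules unfold into a very large branching structure. The sketch below is that textbook construction only, not the authors' neural model.

```python
# Classic bracketed L-system: a compact rule set expands into a large
# plant-like string (F = grow, +/- = turn, [ ] = branch). Illustrates how
# little data can encode complex branching structure.
RULES = {"F": "F[+F]F[-F]F"}

def expand(axiom: str, rules: dict, iterations: int) -> str:
    """Rewrite every symbol with its rule, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

for n in range(6):
    print(n, len(expand("F", RULES, n)))  # length grows ~5x per iteration
```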

In two papers, one published in ACM Transactions on Graphics and the other in IEEE Transactions on Visualization and Computer Graphics, Benes and his co-authors describe how they created their tree-simulation AI models.

"The AI models learn from large data sets to mimic the intrinsic discovered behavior," Benes said.

Non-AI-based digital tree models are quite complicated, involving simulation algorithms that consider many mutually affecting nonlinear factors. Such models are needed in endeavors such as architecture and urban planning, as well as in the gaming and entertainment industries, to make designs more realistically appealing to their potential clients and audiences.

After working with AI models for nearly 10 years, Benes expected them to be able to significantly improve the existing methods for digital tree twins. The size of the models was surprising, however. "It's complex behavior, but it has been compressed to rather a small amount of data," he said.

Co-authors of the ACM Transactions on Graphics paper were Jae Joong Lee and Bosheng Li, Purdue graduate students in computer science. Co-authors of the IEEE Transactions on Visualization and Computer Graphics paper were Li and Xiaochen Zhou, also a Purdue graduate student in computer science; Songlin Fei, the Dean's Chair in Remote Sensing and director of the Institute for Digital Forestry; and Sören Pirk of Kiel University, Germany.

The researchers used deep learning, a branch of machine learning within AI, to generate growth models for maple, oak, pine, walnut and other tree species, both with and without leaves. Deep learning involves developing software that trains AI models to perform specified tasks through linked neural networks that attempt to mimic certain functionalities of the human brain.

"Although AI has become seemingly pervasive, thus far it has mostly proved highly successful in modeling 3D geometries unrelated to nature," Benes said. These include endeavors related to computer-aided design and improving algorithms for digital manufacturing.

"Getting a 3D geometry vegetation model has been an open problem in computer graphics for decades," stated Benes and his co-authors in their ACM Transactions paper. Although some approaches to simulating biological behaviors are improving, they noted, "simple methods that would quickly provide many 3D models of real trees are not readily available."

Experts with biological knowledge have traditionally developed tree-growth simulations, because they understand how trees interact with environmental conditions. These complicated interactions depend on characteristics bestowed on the tree by its DNA, such as branching angles, which are much larger for pines than for oaks. The environment, meanwhile, dictates other characteristics, which is why the same type of tree grown under two different conditions can display completely different shapes.

"Decoupling the tree's intrinsic properties and its environmental response is extremely complicated," Benes said. "We looked at thousands of trees, and we thought, 'Hey, let AI learn it.' And maybe we can then learn the essence of tree form with AI."

Scientists typically build models based on hypotheses and observations of nature. As models created by humans, they have reasoning behind them. The researchers' models generalize behavior from several thousand trees' worth of input data that became encoded within the AI. Then the researchers validate that the models behave the way the input data behave.

The AI tree models' weakness is that they lack training data that describes real-world 3D tree geometry.

"In our methods, we needed to generate the data. So our AI models are not simulating nature. They are simulating tree developmental algorithms," Benes said. He aspires to reconstruct 3D geometry data from real trees inside a computer.

"You take your cellphone, take a picture of a tree, and you get a 3D geometry inside the computer. It could be rotated. Zoom in. Zoom out," he said. "This is next. And it's perfectly aligned with the mission of digital forestry."

More information: Jae Joong Lee et al, Latent L-systems: Transformer-based Tree Generator, ACM Transactions on Graphics (2023). DOI: 10.1145/3627101

Xiaochen Zhou et al, DeepTree: Modeling Trees with Situated Latents, IEEE Transactions on Visualization and Computer Graphics (2023). DOI: 10.1109/TVCG.2023.3307887


Read more:

AI learns to simulate how trees grow and shape in response to their environments - Tech Xplore

Read More..


Chattopadhyay receives the NAS Michael and Sheila Held Prize | Cornell Chronicle – Cornell Chronicle

The National Academy of Sciences (NAS) has selected Eshan Chattopadhyay, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, as the 2024 co-recipient of the Michael and Sheila Held Prize, along with collaborator David Zuckerman, professor of computer science at the University of Texas at Austin, for their novel work on randomized algorithms.

Given annually since 2018, the Held Prize honors "outstanding, innovative, creative, and influential research in the areas of combinatorial and discrete optimization, or related parts of computer science." The pair will receive the award, along with a $100,000 prize, at the NAS 161st Annual Meeting on April 28.

"I am deeply honored to receive this award and to have my work with David on randomness extractors recognized by the larger scientific community," Chattopadhyay said.
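For readers curious what a randomness extractor does, the sketch below shows the classic one-bit inner-product two-source extractor (due to Chor and Goldreich). It is far weaker than the Chattopadhyay-Zuckerman construction the prize recognizes, but it shares the interface: two independent weak random sources in, a nearly uniform bit out.

```python
# Classic two-source extractor: the GF(2) inner product of two n-bit sources.
# If the sources are independent and jointly carry enough min-entropy, the
# output bit is close to uniform. Illustrative only -- not the prize-winning
# construction, which works at far lower min-entropy.
def inner_product_extractor(x: int, y: int, n: int) -> int:
    """Parity of the bitwise AND of the low n bits of x and y."""
    mask = (1 << n) - 1
    return bin(x & y & mask).count("1") % 2

print(inner_product_extractor(0b1011, 0b1101, 4))  # -> 0 (two overlapping bits)
```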

Read the entire story on the Cornell Bowers CIS website.

See the rest here:

Chattopadhyay receives the NAS Michael and Sheila Held Prize | Cornell Chronicle - Cornell Chronicle

Read More..

Faulkner University to Add Computer Engineering Degree in Fall 2024 – Faulkner University

Mike Herridge, right, speaks with a WSFA news crew about Faulkner's new degree.

Computer engineering is coming to Faulkner in fall 2024 as the university officially adds to its offerings one of the most in-demand degrees, according to collegeconsensus.com.

Faulkner's brand-new Bachelor of Science in Computer Engineering will be added to the university's ever-popular computer science department, which for the last six years has boasted 100 percent job placement for its graduates.

Computer engineering will fuse computer science and electrical engineering to equip students with the skills employers are looking for, ensuring they're ready to bridge the gap between programming and the real world.

Some common employers for computer engineers are Microsoft, Cisco, Amazon, Tesla, SpaceX, General Motors, and Cummins.

With a grounding in Christian perspectives and morals, students gain preparation for careers in:

Space exploration design and control

Robotic design and application

Automotive controls

Human-to-technology interfacing, like Alexa or "Hey Google"

Industrial process control

"After graduating from our program, students with a computer engineering degree will be able to marry software with hardware. They'll be prepared to write desktop applications, develop software, and create hardware, as well as control and develop the programs and components that run planes, cars and rockets," said Mike Herridge, chair of the computer science department. "Overall, our students from the department of computer science at Faulkner are ready to create and maintain any consumer-facing or back-end applications for industry and business. Today, every industry requires a computer expert at the heart of its business."

Technology is growing, changing and expanding rapidly, and it needs problem solvers and innovators. For students who enjoy the challenge of solving problems, creating new ways of doing things, or staying on top of the latest technology, Faulkner's Computer Science Department is the place to be.

The Computer Science Department of Faulkner University is hands-on, with a strong emphasis on the fundamentals of programming, hardware and software. Students learn from instructors who work in the field and can advise on what they need to know to be valuable in the industry.

Students who are interested in Faulkner's new degree can apply now at www.faulkner.edu/apply.

Read the original post:

Faulkner University News Faulkner University to Add Computer Engineering Degree in Fall 2024 - Faulkner University

Read More..

AI not yet ready to replace human workers, says MIT report – The Boston Globe

In a paper that has been submitted to the journal Management Science, Thompson's team concluded that it's not enough for AI systems to be good at tasks now performed by people. The system must be good enough to justify the cost of installing it and redesigning the way the job is done. Thompson said that for now, there are a lot of places where "... humans are a more cost-efficient way to do that."

The researchers focused on AI applications that use machine vision, the ability of computers to understand visual data such as pictures and video. For instance, many smartphones have a machine vision feature that can identify objects that have been photographed by the phone's camera.

The researchers chose machine vision because its a well-established technology thats already used for many tasks, like reading labels or inspecting manufactured goods for defects.

Over a three-year period beginning in 2020, the researchers identified 420 commercial and industrial tasks where an AI system with machine vision might be capable of supplanting human workers. They conducted online surveys of people with expert knowledge of how each task was performed, to figure out how much a business might lower its costs by substituting machine vision for human labor.

The researchers found that in most cases, the visual aspect of a job was just a small part of a worker's responsibilities. For instance, bakers must visually inspect the loaves of bread they're producing. But bakers spend only about 6 percent of their time doing this. Shifting this job to AI might enable the bakery to get by with fewer workers. But the cost of today's machine vision systems is much too high to justify cutting labor costs by just 6 percent. Only if AI becomes much cheaper would the tradeoff make sense.
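The underlying comparison can be written down directly. Here is a back-of-the-envelope sketch with made-up numbers (the article gives only the 6 percent figure):

```python
# Automating the visual part of a job pays off only if the AI system's
# annualized cost is below the labor cost it actually displaces.
def worth_automating(annual_labor_cost: float, visual_share: float,
                     annual_ai_cost: float) -> bool:
    """True if the AI system costs less than the labor it would replace."""
    return annual_ai_cost < annual_labor_cost * visual_share

# Hypothetical bakery: $200,000/year of baker labor, 6% of it visual
# inspection, so at most $12,000/year of labor is displaceable.
print(worth_automating(200_000, 0.06, annual_ai_cost=40_000))  # False: too costly
print(worth_automating(200_000, 0.06, annual_ai_cost=8_000))   # True: now it pays
```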

Applying the same measurements to all 420 tasks, Thompson and his colleagues found that installing machine vision in place of human labor would lead to lower overall costs only 23 percent of the time. In the great majority of cases, human workers were cheaper.

Of course, these results apply only to the current state of the art. Thompson noted that AI systems keep getting better and cheaper. Over time, they'll be cost-effective at a much larger number of jobs.

In addition, the study only looks at machine vision applications. It doesn't include research into the impacts of the latest generative AI systems, such as ChatGPT. Thompson said his group is now working on ways to assess the potential impact of these systems. But he noted that this might take quite a while, because figuring out the potential cost savings for thousands of specific tasks requires a great deal of research.

While the new research paper has yet to go through peer review, other academics think Thompson's team is on the right track.

Christoph Riedl, associate professor of supply chain and information management systems at Northeastern University, called the study "absolutely plausible." Riedl noted that it fits the pattern of other major innovations that only achieved their full potential many years after they were first invented. "When we started using electricity to run factories instead of steam engines, switching to electricity didn't make factories more productive," said Riedl. "We had to learn how to use this new technology to gain any benefits from it."

Joseph Fuller, a professor at Harvard Business School, where he studies the future of work, said that the latest AI innovations will have dramatic impacts on some jobs. For instance, he said generative AI is already so good at creating computer software that it will reduce demand for human programmers. It's relatively easy to teach an AI how to write software -- just feed it lots of examples created by humans.

But teaching an AI how to manufacture a complex object is far more difficult. "I can't unleash it on some database on the accuracy of manufacturing processes," said Fuller. "There is no such data, so I have nothing to train it on."

Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him @GlobeTechLab.

Link:

AI not yet ready to replace human workers, says MIT report - The Boston Globe

Read More..