
UA researchers set to take part in energy study – Arkansas Online

FAYETTEVILLE -- Researchers at the University of Arkansas, Fayetteville will study how artificial intelligence can be used to make energy infrastructure more responsive and resilient, according to grant award information published by the National Science Foundation.

North Dakota State University is leading the multidisciplinary effort funded by a $5.98 million grant. UA researchers will receive $1.45 million in grant money in support of the four-year project, the university announced Monday.

Haitao Liao, a UA industrial engineering professor, is among the principal investigators for the effort.

Energy infrastructure is a term used to refer to power plants and transmission lines that allow users to access electricity.

"As was seen from the cold weather-related blackouts and disruptions in Texas during February, it is imperative to build resilience into energy delivery systems," Roy McCann, a UA electrical engineering professor, said in a statement.

Efforts have ramped up at the National Science Foundation and within the federal government more broadly to take an active role in supporting artificial intelligence research, also known as AI research.

A federal law that took effect on Jan. 1 established the National AI Initiative to coordinate research on artificial intelligence, defined in the law as referring to "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments."

The recently funded effort involving six UA researchers aims "to investigate the potential of AI as a driving force for bringing about radical changes to critical infrastructures and industries," according to a grant abstract published on the National Science Foundation's website.

The UA researchers are being led by Liao. Along with McCann, they include Ed Pohl, an industrial engineering professor; Xiao Liu, an industrial engineering assistant professor; Xintao Wu, a computer science and computer engineering professor; and Yue Zhao, an electrical engineering associate professor.

Researchers at the University of Nevada-Las Vegas and Nueta Hidatsa Sahnish College are also collaborating on the project.

More here:

UA researchers set to take part in energy study - Arkansas Online


Moving to the cloud: Resistance is futile but not for everyone – TechGenix

No. 1 on Gartner's top 10 strategic technologies for 2010 was cloud computing. It was once again top of the list in 2011, but then 2012 saw cloud computing demoted to No. 10. It danced around the list over the next few years and eventually started to take on different personas. Personal cloud was given a middle-of-the-road placing in 2013, and then in 2014, it advanced into the architecture behind the cloud. Makes sense. And then 2015 introduced us to the concept of computing everywhere. If we do a little bit of forensic analysis, it seems that by 2016 enterprise organizations had embraced the cloud, and it moved from trendy to operational.

The reasons are many. Offloading the heavy lifting required to acquire, set up, and maintain the internal infrastructure required for today's technology is a huge weight off any corporate executive's shoulders. The ability to access applications and data regardless of location or time zone means an increase in productivity. There is an often-heard argument that cloud computing is more cost-effective, although there is always another side to that argument. Nonetheless, many enterprise organizations have made the move to the cloud. But not all. One cannot help but wonder, with all of the positive press around the cloud, not to mention the skilled sales tactics employed by profit-maximizing vendors, why there are still organizations resisting the urge to jump on board and migrate to the cloud.

Stripped of all the fancy wrapping, the cloud is really just someone else's server. Which means that the security around the service that is provided is only as good as the practices employed, not to mention the skillsets of the service provider's employees. Dropbox, OneDrive, and Google, oh my! The increased use of cloud platforms has also meant an increase in security concerns. Insecure APIs, external attacks, and compliance risks are all at the forefront of concerns contemplated by cybersecurity professionals. But it is also true that we would have these same concerns if our servers all resided in an onsite locked room. The advantage to hackers is that data is now centralized and therefore creates a rather irresistible target. So perhaps security concerns are a wash. We will have the same concerns no matter where our data resides. It is true that when servers are onsite, we can maintain greater control. It also means that we have the fiscal and legal responsibility to ensure the security of that data. To some, it is less stressful to contract that responsibility to a service provider dedicated to overseeing security. Overall, service providers are quite disciplined when it comes to applying security patches and updates, which is not always true of localized IT departments.

Companies are seldom willing to give up those requirements that make them unique. To move to a hosted application in the cloud means conforming to the configuration limitations imposed by the vendor. Service providers need to do this. If they were to offer customization to every customer, their offering would be much too expensive to be attractive to the average corporate consumer. In addition, it would become impossible for them to maintain upgrades. Only those organizations that are willing to let go of nonstandardized business processes can benefit from migrating applications. It is important to recognize that repeatable and standardized processes are much less expensive to operate and maintain. While it is true that disciplined and mature organizations can develop onsite applications that follow documented and standardized processes, it is also expensive for organizations to build workarounds on the fly. And these workarounds, far too often, turn into the regular process.

An often-heard argument against cloud service providers is the limited access to data. And it is true that organizations with high turnover may find themselves at a disadvantage when choosing to move to a new service provider. In an effort to keep our data secure, cloud service providers will ask their customers to identify by name those who will require different levels of access to data. It is administratively heavy to change these contact names for obvious reasons, and we do want it to be a difficult procedure with many checks and balances. Unfortunately, this can be construed as not having access to data that is rightfully owned by the customer. On the other side of this equation, it is important to ensure that during the procurement process, it is clear that the data is owned by the customer and we have access to it as required. This includes the ability to download all required data should we choose to change our service provider.


Yet another data consideration is what vendors feel entitled to do with the data entered and stored by organizations. While legal compliance, in theory, represents the rights of customers, it is important to understand that we need to have a legal interpretation of the contract we are signing. This is yet another risk that smaller organizations encounter. It is an expense that can become unmanageable with less respectable vendors.

Can your server connection handle moving the kind of data that your organization requires? Connectivity is one of the more controversial topics of cloud services. It is also one that is often not discovered until after go-live, which can be the cause of a very poor user experience. When selecting a cloud service provider, there are a few requirements that can help to alleviate this issue. First of all, select a service provider that has multiple datacenters. Latency can be caused by distance, and it is a good idea to know that you will have the option to move to a datacenter in a different location if this turns out to be a persistent issue. Another issue may have nothing to do with cloud anything. Ensure that the bandwidth of your Internet service provider is understood and that it meets the suggested requirements of the cloud service provider. And then, of course, there is the bandwidth of your internal network. This needs to be tested before cutover.
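One simple way to sanity-check that bandwidth before cutover is to time the download of a large test file hosted near the candidate datacenter. The sketch below is a minimal illustration only; the test URL is a hypothetical placeholder, not any specific provider's endpoint.

```python
# Minimal pre-cutover throughput check; the test URL is a hypothetical placeholder.
import time
import urllib.request

TEST_FILE_URL = "https://speedtest.example.com/100MB.bin"  # hypothetical test file

start = time.monotonic()
with urllib.request.urlopen(TEST_FILE_URL, timeout=60) as response:
    payload = response.read()
elapsed = time.monotonic() - start

mbps = (len(payload) * 8) / (elapsed * 1_000_000)
print(f"Downloaded {len(payload) / 1e6:.0f} MB in {elapsed:.1f} s (~{mbps:.0f} Mbps)")
```

Running the same check from the office network and from a home connection gives a rough picture of whether a bottleneck sits with the cloud provider, the Internet service provider, or the internal network.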

Enterprise technology projects are expensive and time-consuming. They also require the engagement of many operational team resources and stakeholders. There are times in every organization's life when it is wise not to take on another large project. Timing is everything. If adequate resourcing and timing are not allocated, a poorly planned configuration can cause a headache that could stick around for a very long while.

The ability to host applications in the cloud, usually via a vendor that provides the application as a service, has been a savior in current times. The ability to access applications from any location, not to mention removing the need for a server room fully staffed with expensive technology resources, has been an advantage under our current work-from-home regime. The concept is solid, and the positive results have been proven. However, there are legitimate reasons that enterprise organizations may hold off on migration projects. Overall, there does seem to be a direct correlation with trust: trust of vendor partners and cloud service providers, trust that our Internet service providers can handle the constant flow of data packets we will need to send, and even trust that our stakeholders have built and can maintain standardized and repeatable processes. While technology continues to offer new solutions to compensate for the constant challenges thrown at the business world, it seems that the issues that keep a CIO awake at night have not changed much over the years.


Continue reading here:
Moving to the cloud: Resistance is futile but not for everyone - TechGenix


Real words or Buzzwords?: Cloud Native IoT – SecurityInfoWatch

Oct. 26, 2021

A continuing look at what it means to have a 'True Cloud' solution and its impact on today's physical security technologies


See the original post here:
Real words or Buzzwords?: Cloud Native IoT - SecurityInfoWatch


Endless regression: hardware goes virtual on the cloud – E&T Magazine

In the summer of 2018, professors John Hennessy and David Patterson declared a glorious future for custom hardware. The pair had picked up the Association for Computing Machinery's Turing Award for 2017 for their roles in the development of the reduced instruction set computer (RISC) architectural style in the 1980s.

Towards the end of their acceptance speech, Patterson pointed to the availability of hardware in the cloud as one reason why development of custom chips and the boards they would be soldered onto is getting more accessible. Cloud servers can be used to simulate designs on-demand and, if you have enough dollars to spend, you can simulate a lot of them in parallel to run different tests. If the simulation does not run quickly enough, you can move some or all of the design into field-programmable gate arrays (FPGAs). These programmable logic devices won't handle the same clock rates as a custom chip but they might only be five or ten times slower, particularly if the design you have in mind is some kind of sensor for the internet of things (IoT), where cost and energy are more important factors than breakneck performance.

"The great news that's happened over the last few years is that there's instances of FPGA in the clouds," said Patterson. "You don't have to buy hardware to do FPGAs: you can just go to the cloud and use it. Somebody else sets it all up and maintains it."

A second aspect of this movement is being driven by projects such as OpenROAD organised by the US defence agency DARPA. This aims to build a portfolio of open-source hardware-design tools that lets smaller companies create chips for their own boards instead of relying on off-the-shelf silicon. In principle, that would make it easier to compete with bigger suppliers who traditionally have been able to deploy customisation to improve per-unit costs.

For more than a decade, those bigger silicon suppliers have used simulation to deal with one of the main headaches in custom-chip creation. Getting the hardware to boot up and run correctly is one thing. Getting the software to run often winds up a more expensive part of the overall project. As debugging software for a chip that doesn't exist yet is tricky, they turned to simulation to handle that. Even if the hardware is not fully defined, it is often possible to use abstractions to run early versions of the software, which is then gradually refined as the details become clearer. The old way of handling that was to use some hardware and FPGA combination that approximated the final design and have it running on a nearby bench. That is changing to where it's not just hardware designers running simulations; it's increasingly the software team.

"When we started 12 or 13 years ago, everyone was doing simulation for hardware to get the SoC to work," says Simon Davidmann, president of Imperas, a company that creates software models of processor cores. "We founded Imperas to bring these EDA technologies into the world of the software developers. We learned with Codesign [Davidmann's previous company] that software development would become more like the hardware space."

A second trend is the pull of the cloud. The designs may run on models that trade off accuracy for speed on a server processor in the cloud or a model loaded into an FPGA or a mixture of both. As Imperas and others can tune their models for performance by closely matching the emulated instructions to those run by the physical processor, a typical mixture is to have a custom hardware accelerator and peripherals emulated in the FPGA and the microprocessors in fast software models.

Davidmann says the trend towards the use of more agile development approaches in the embedded space is driving greater use of simulation. Even hardware design, which does not seem a good fit for a development practice that relies on progressive changes to requirements and implementations, has used them. One of the main reasons for this is the extensive use of automated testing. Whenever code, whether it's hardware description or software lines, gets checked in, the development environment does a bunch of quick tests, with more scheduled for the nighttime. If the new code triggers new bugs, it gets sent back. If not, the developer can continue.

This continuous integration and test relies on servers being available and ready to run the emulations and simulations whenever needed. That, in turn, points to the cloud, as it is easy to spin up processors for a battery of tests on demand. Even if the target hardware has finally come back from the fab, simulation still gets used. Though one way to test in bulk on finished hardware is to run device farms, basically shelves stacked with the target boards and systems, they present maintenance issues. "They are always breaking and often have the wrong version of the firmware," Davidmann says. "Moving to continuous integration doesn't work that well with hardware prototypes."

You can quickly push new versions to simulations in the cloud, turn them off and on again virtually and, funds allowing, run many of them in parallel, which can be vital if a team has to meet a shipping deadline with a shipment-ready form of the firmware.
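As a rough sketch of what such a check-in gate can look like, the script below assumes a hypothetical command-line simulator binary and a directory of quick test images; real flows would use a vendor's own tools, but the shape is the same: run the quick tests in parallel and reject the check-in on any failure.

```python
# Sketch of a check-in regression gate; "./run_sim" and the test layout are
# hypothetical stand-ins for a real simulator and test suite.
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SIM_BINARY = "./run_sim"                                  # hypothetical simulator
QUICK_TESTS = sorted(Path("tests/quick").glob("*.elf"))   # assumed test images

def run_test(test_image: Path) -> tuple[str, bool]:
    """Launch one simulation job and report whether it passed."""
    result = subprocess.run(
        [SIM_BINARY, "--image", str(test_image), "--timeout", "300"],
        capture_output=True,
        text=True,
    )
    return test_image.name, result.returncode == 0

if __name__ == "__main__":
    # Cloud instances make it cheap to run these jobs in parallel on demand.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_test, QUICK_TESTS))

    failures = [name for name, ok in results if not ok]
    if failures:
        print("Check-in rejected; failing tests:", ", ".join(failures))
        sys.exit(1)
    print(f"All {len(results)} quick tests passed; check-in accepted.")
```

The nightly run is typically the same harness pointed at a much larger test directory, with the parallelism dialled up to whatever the budget allows.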

Now, the use of simulation is moving even further into the lifecycle, as evidenced by Arm's launch of its Virtual Hardware initiative last week. The core technology underneath this is the same as that used to support conventional chip designs, including fast processor models similar to those provided by Imperas and others.

In its current form, Arm Virtual Hardware is limited in terms of the processors it supports. The off-the-shelf implementation that's in a free beta programme covers just one processor combination: the recently launched Cortex-M55 and its companion machine-learning accelerator. The presence of the accelerator provides much of the motivation for the virtual-hardware programme.

Stefano Cadario, director of software product development, said at Arm's developer summit last week that one of the driving forces behind the programme is the steep increase in the complexity of software, with several contributing factors: managing security, over-the-air updates and machine learning.

Where so much of the interaction the embedded device has is with cloud servers that deliver software updates as well as authenticating transactions, it makes sense to be able to run and debug that in the cloud. But machine learning presents a situation where updates will be far more frequent than they are today. The models will typically be trained off-device on cloud servers, as the target hardware does not have the performance or raw data to do the job itself. Potentially, devices could get updated models every night, though the frequency will most likely be a lot lower than that.

Development teams need to be sure that a new model won't upset other software when loaded, which points to regression testing being used extensively on simulated hardware in the cloud. That automated testing potentially makes it possible for the machine-learning models to be updated by specialist data scientists without the direct involvement of software writers, unless there is a big enough change to warrant it. The result is a situation where Arm expects customers to routinely maintain cloud simulations for years, through the entire lifecycle of the production hardware.

As with existing virtual-processor models, the Arm implementation makes it possible to gauge performance before a chip has made it back from the fab. According to Cadario, Cambridge Consultants used an early-access version to test the software for a medical device, and Google's TensorFlow team optimised the machine-learning library for the accelerator earlier in the development cycle than they would normally.

Arm has not yet said which, if any, other processors would be added to the programme. However, it seems likely that it will not go outside the company's own portfolio. "Where we are different is that we support heterogeneous platforms," Davidmann says. "We've got some of the largest software developments using our stuff because it can support heterogeneous implementations."

There will still be a place for prototype hardware, not least because field trials of ideas will still have to take place before suppliers commit to hardware. But if there is a push towards the use of more custom hardware, it will be cloud simulation that helps drive it.


Read the rest here:
Endless regression: hardware goes virtual on the cloud - E&T Magazine


How bare metal servers enhance the work of online media – www.computing.co.uk

Content production is increasing rapidly, and so are data transfer speeds. Providers who are eager to stimulate demand for their services must consider users' requests. Dedicated servers as-a-service, which we are all accustomed to, are the best option to choose.

Verizon Media makes use of hundreds of thousands of machines with more than 4 million cores. Two years ago, Verizon Media senior director architect James Penick said, "We've understood that we need to build a foundation before building the house. Bare metal servers form the basis of our infrastructure. It's like hardening the building using concrete and reinforcing bars."

Bare metal servers help Verizon Media to create the ideal infrastructure for performance optimisation and carry out standard quota requests within a few minutes.

A bare metal server is a physical server located in the cloud. Unlike virtual machines, a bare metal server puts the equipment under single-user control. The client can manage all hardware resources alone, control the server load directly, and work independently from other users' virtual machines.

Bare metal servers can be used in many different ways: you can deploy virtual machines on them, run multiple containers, or dedicate the whole node to just one project. Bare metal servers are often chosen by media platform developers and those working on applications that require high speed and data security. Let's discuss why.

Unlike virtual machines, dedicated nodes can cope with resource-intensive tasks more efficiently. According to research, the performance of virtual machines running workloads that require high data-processing speed is up to 17 per cent lower than that of bare metal servers. That's because dedicated server users have full access to their servers and can use all computing resources on their own.

We even knew this back in 2015. In his op-ed column on TVTech, Media Systems Consulting founder Al Kovalick emphasised the advantages of productive dedicated servers for the media industry.

"Bare metal servers have no virtualisation layer. The workloads are deployed on the servers with an OS that has certain preliminary configurations, but the cloud provider doesn't introduce any additional software. It's up to the user to define which software stack to use over the OS. These servers are managed through a control panel that monitors the deployment process, the server capacity and how the server is used. Dedicated servers have maximum performance. The user controls the entire software stack, and there is no virtualisation tax."

Some cloud providers allow you to use high-performance NVMe disks and 3rd-generation Intel Xeon Scalable processors (Ice Lake). For example, we started integrating these processors into our infrastructure in April 2021, together with other pioneers. Such equipment allows you to solve any problem quickly.

Imagine that you're launching a new 'Twitch killer.' According to twitchtracker.com, this year the video streaming platform's users have viewed more than 1 trillion minutes of video content. Nowadays an average 720p ('HD') video takes about 900 MB per hour, meaning that Twitch servers send users over 46.5 million gigabytes every day. These are approximate calculations, but this is a good example of what kind of server infrastructure an "average entertainment portal" needs.
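Those numbers hang together on a quick back-of-the-envelope check. The sketch below assumes roughly 1.13 trillion viewing minutes per year (a little above the cited 1 trillion figure) and 900 MB per streamed hour.

```python
# Back-of-the-envelope check of the bandwidth figures above (assumed inputs).
minutes_per_year = 1.13e12   # slightly above the "1 trillion minutes" cited
gb_per_hour = 0.9            # ~900 MB per hour of 720p video

hours_per_day = minutes_per_year / 60 / 365
gb_per_day = hours_per_day * gb_per_hour
print(f"~{gb_per_day / 1e6:.1f} million GB served per day")   # roughly 46 million GB
```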

According to the Digital 2021 United States of America report, this year the average download speed on mobile and fixed connections in the United States reached 67.3 Mbps and 174.6 Mbps respectively.

Content providers need to satisfy users' growing demands and take their capabilities into account. They must compete with giants like Netflix if they want to gain users' attention in the media space. Thus, content delivery speed and quality must be at a high level, which requires infrastructure with the corresponding bandwidth, and that is most achievable with bare metal servers.

According to the Ericsson Mobility Report, 63 per cent of mobile traffic comes from videos. By 2025, this value will have reached 76 per cent. The growing market share of streaming media is responsible for such rapid changes.

If media platforms, services and portals want to be successful in the growing content consumption market, they need to provide maximum performance. At peak load times, i.e. during calendar events, major sports broadcasts and the like, the opportunity to choose and configure all equipment components becomes more important than ever.

In G-Core Labs' public cloud, Bare-Metal-as-a-Service, customers can automate all of these tasks (including equipment configuration, orchestration, and adding new dedicated servers) using APIs. This will allow you to scale your platform quickly and to make sure it meets your clients' resource needs.

"Unexpected performance decrease is what people dislike about cloud computing," saidMark Stymer, President of Dragon Slayer Consulting. "If you opt for a bare metal server, your server's performance is more predictable, and the provider can scale it up on demand and do it relatively quickly."

If you publish a lot of content, you'd better choose a provider with a broad network of points of presence. The equipment should preferably be located in Tier III or IV data centres at larger traffic exchange points. In this case, there will be no need to worry about high content delivery speed and reliability.

Level      Idle time (hours per year)    Fault tolerance value (%)
Tier I     28.8                          99.671
Tier II    22                            99.741
Tier III   1.6                           99.982
Tier IV    0.4                           99.995

Data centre levels, as classified by Uptime Institute
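The two columns of the table are two views of the same number: annual idle time is simply the unavailable fraction applied to the 8,760 hours in a year, as the short check below closely reproduces.

```python
# Annual downtime implied by each fault tolerance value (8,760 hours per year).
availability = {"Tier I": 99.671, "Tier II": 99.741, "Tier III": 99.982, "Tier IV": 99.995}

for tier, pct in availability.items():
    idle_hours = 8760 * (1 - pct / 100)
    print(f"{tier}: ~{idle_hours:.1f} hours of idle time per year")
```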

Many media platforms have certain user security policies. No matter if it's a corporate social network or an adult content site, user data and content must have reliable protection. Recall the 15 biggest data breaches of the 21st century, and bear in mind that start-up projects constantly face a number of smaller risks as well.

G-Core Labs' users have access to enhanced protection against DDoS attacks. In case of a server attack, the traffic is redirected to a threat mitigation system (TMS), which detects attacks, filters the traffic and allows only harmless data to reach the server. Users can configure the TMS protection policies on their own. The main advantage is that the IP doesn't get blocked during an attack, and the server remains accessible to users.

When creating a media business, be it a streaming platform, a small independent media source publishing its own content, or a photo and video hosting site, you need to decide how to optimise the company's resources. Dedicated servers provide excellent opportunities to achieve this. The providers' pricing policies allow you to configure the required server precisely enough, and as a result you won't have to overpay for any idle megabyte of memory. Cost predictability is another advantage provided by bare metal servers.

For example, our dedicated server tariff enables you to select individual parameters. You can install RAID or change its type, increase the disk volume and the number of disks, increase RAM volume, replace SSDs with HDDs and vice versa, and install 10 Gbps network cards. You can also configure hardware with the engineers' help.

Internet tariff parameters including traffic package and bandwidth size can be changed independently. Changing to another tariff requires no extra payment.

See the rest here:
How bare metal servers enhance the work of online media - http://www.computing.co.uk


Wells Fargo has a new virtual assistant in the works named Fargo – CNBC

A Wells Fargo logo is seen at the SIBOS banking and financial conference in Toronto

Chris Helgren | Reuters

Wells Fargo is developing a virtual assistant to help it convert more retail banking customers into digital users, CNBC has learned.

The assistant, named Fargo, will be able to execute tasks including paying bills, sending money and offering transaction details and budgeting advice, according to Michelle Moore, the bank's consumer digital head. It's expected to be out next year after the bank releases a revamped mobile app and website in early 2022, she said.

The move by Wells Fargo, a consumer banking giant with more branches than any lender except JPMorgan Chase, is part of a broader technology overhaul under CEO Charles Scharf. Updating the bank's aging systems has been a priority for Scharf since becoming chief executive two years ago, as well as a key part of the turnaround needed after the bank's 2016 fake accounts scandal. Last month, Wells Fargo announced a decade-long plan to move computing to Google and Microsoft cloud servers.

Michelle Moore, Consumer Digital head at Wells Fargo

Source: Wells Fargo

"Everyone lives on their phone, and there's an expectation on how things should work," Moore said in a Zoom interview. "Our clients were telling us that our app was not easy to use, it's not intuitive, there were too many dead ends and clients were getting stuck."

While it had the most extensive brick-and-mortar presence of any U.S. bank for years, only being eclipsed in branch count last quarter by JPMorgan, Wells Fargo trails rivals in digital adoption. Regulators have criticized the firm's technology systems, and a 2019 mishap at a Minnesota data center knocked out customers' mobile and web access for hours.

Its 27 million active mobile users are fewer than those of JPMorgan and Bank of America. Despite the boost that the coronavirus pandemic provided for all things digital, Wells Fargo's 4.2% user growth in the past year is less than half JPMorgan's gains. Studies have shown that digital users are typically more satisfied with their banks, cheaper to serve and less likely to switch providers.

That's probably why Wells Fargo recruited Moore late last year. She is a Bank of America technology veteran who helped develop the company's own virtual assistant, known as Erica. That artificial intelligence-powered service has seen its use surge during the pandemic, tripling the number of interactions to 104.6 million in the past year, Bank of America said this month.

Early this year, Wells Fargo began studying why customers resorted to calling phone help lines and where the bank's app failed them, Moore said. She added that the redesigned app has a simpler login and consolidates payment options, whereas previously they were scattered throughout. Moore also said that future versions will be more capable, as part of the company's new digital-first efforts.

Wells Fargo's revamped banking app.

Source: Wells Fargo

"We can help clients really live their lives and be more than checking balances and moving money," Moore said. "We want to be integrated and we want to help clients do their investments or buy their first house."

As for the name of the bank's virtual assistant, Moore said it was an obvious choice.

"We weren't trying to create a new brand or persona here," she said. "There's a lot you can do with 'Fargo.' Flip the word around, you can 'Go Far.' Let Fargo take you far."

Go here to read the rest:
Wells Fargo has a new virtual assistant in the works named Fargo - CNBC


Facebook is spending more, and these companies are getting the money – MarketWatch

Facebook Inc. plans a spending spree for next year that could give a boost to networking providers and chip companies.

The social-media giant disclosed Monday that it expects capital expenditures of $29 billion to $34 billion in 2022, up from an estimated $19 billion in 2021. The 2022 forecast came in significantly above the FactSet consensus of $23.3 billion from prior to Facebook's (FB) Monday afternoon earnings call.

Dont miss: Facebook offers a needed dose of pain relief in the face of Apple privacy challenges

The anticipated bump is "driven by our investments in data centers, servers, network infrastructure and office facilities," Chief Financial Officer David Wehner said on the call. One big factor behind the spending growth is increased investments in artificial intelligence and machine learning, according to Wehner, as Facebook looks to enhance its recommendations and ad performance.

See also: Facebook earnings top $9 billion, but Apple change puts sales in the hot seat

Several big hardware and infrastructure companies could gain as Facebook opens its wallet, analysts say. Shares of Arista Networks Inc. (ANET) are up 4.3% in Tuesday trading, while shares of Cisco Systems Inc. (CSCO) are up 1.9%, with both seen as potential beneficiaries.

"This is a very positive read for Arista as Facebook is one of the two cloud titans that account for a large portion of Arista revenue," Evercore ISI analyst Amit Daryanani wrote in a note to clients. "This is also a positive for Cisco, to a lesser extent, as we think they may gain some share at Facebook in the 2022/23 time frame."

Wells Fargo's Aaron Rakers agreed that Arista could benefit from Facebook's heightened spending, as the company has been Arista's second-largest customer after Microsoft Corp. (MSFT).

Facebook has come in a few billion dollars below its capital-expenditure forecasts in recent years, according to Rakers, though he still sees the company's commentary as one notable positive derivative data point as it relates to cloud capex spending trends into next year.

He was also encouraged by Facebook's commentary around its plans to spend up on solutions that help with artificial intelligence, a trend that could help Nvidia Corp. (NVDA).

"Recently a semiconductor industry analyst had noted that Facebook would be going all in on NVIDIA GPUs vs. using Intel's Habana solutions, while also noting that it expects to deploy Intel's Mount Evans IPUs (Infrastructure Processing Units) potentially in every server," Rakers wrote.

He added that the company's discussion should be considered a positive for the overall server CPU market, but that it's particularly interesting in light of continued expectations that AMD could announce Facebook as a new meaningful customer with its third-generation EPYC Milan CPUs going forward.

Nvidia shares are up 6.3% in Tuesday's session, and the company passed the $600 billion market-capitalization threshold for the first time in intraday trading. Shares of Advanced Micro Devices Inc. (AMD) are up 1.0%.

Facebook's outlook also bodes well for storage companies, several analysts said. Susquehanna analyst Mehdi Hosseini saw the forecast as consistent with his expectation that server and storage builds will show better-than-seasonal trends in the first half of 2022.

Wells Fargo's Rakers said that the commentary remains a positive for the HDD [hard-disk drive] industry (WDC & STX), with cloud-driven nearline HDDs now accounting for roughly 60% or more of total HDD industry revenue.

Western Digital Corp. (WDC) shares are off 0.7% in Tuesday trading, while Seagate Technology Holdings PLC (STX) shares are down 2.7%.

Here is the original post:
Facebook is spending more, and these companies are getting the money - MarketWatch


Cybersecurity to server admin in Linux with this 12-course bundle – BleepingComputer

By BleepingComputer Deals

Most websites today use servers that run Linux. You can also find this open-source OS in cybersecurity, and virtualized in the cloud.

In other words, if you plan to work in technology, there are many good reasons to learn Linux.

The Complete 2021 Learn Linux Bundle helps you master the system, with 12 full-length video courses working towards a respected certification. The training is worth $3,540 in total, but you can get it today for only $59 at Bleeping Computer Deals.

Aside from the fact that Linux is free, there are several reasons why this operating system is popular. First, it's secure. Many cybersecurity professionals run Kali Linux on the desktop. Second, it's flexible. There are no walled gardens here. Third, Linux was made for hacking.

This bundle helps you to explore all three key features, and learn valuable tech skills along the way. You get 120 hours of hands-on tutorials in total, delivered by genuine Linux experts.

You start with the fundamentals: how to install Linux, choose your distro, navigate the OS, and use popular apps. The training then shows you how to configure Linux in the cloud, and you pick up key server admin knowledge.

Other courses focus on Linux automation with scripting, and important security techniques. Just as importantly, you get full prep for the CompTIA Linux+ exam. This will impress recruiters around the world.

All the courses come from iCollege, an online learning platform that has helped IT professionals in 120 countries since 2003.

Order today for just $59 to get lifetime access to all 12 courses, and save over $3,400 on the training!

Prices subject to change.

Disclosure: This is a StackCommerce deal in partnership with BleepingComputer.com. In order to participate in this deal or giveaway you are required to register an account in our StackCommerce store. To learn more about how StackCommerce handles your registration information please see the StackCommerce Privacy Policy. Furthermore, BleepingComputer.com earns a commission for every sale made through StackCommerce.

Link:
Cybersecurity to server admin in Linux with this 12-course bundle - BleepingComputer


"Unified Technology Solution" – An InfoNetworks Service that Delivers Managed IT & Network Security Plus Voice and Internet Solutions -…

LOS ANGELES, October 26, 2021--(BUSINESS WIRE)--InfoNetworks today announced a new and unique service called "Unified Technology Solution." Promoted as the answer to fill an existing void in the marketplace, InfoNetworks' Unified Technology Solution offers businesses managed IT services, complete network security, voice and telephony services, and connectivity via a complete package from a single provider.

For more than a year, businesses worldwide have faced unprecedented global events that are dictating policies and procedures. Companies have necessarily cut key budget items, face new challenges, and manage their businesses with a reduced workforce. Many of these organizations have been tasked with creating remote infrastructure to help mitigate the ever-changing landscape and support work-from-home or hybrid work environments.

InfoNetworks' Unified Technology Solution is designed to address these challenges with an all-inclusive platform that allows employees, managers, and executives to stay connected and secure both in the office and remotely. InfoNetworks' data connections support the added influx of traffic to the office, while the included cloud-based PBX allows extensions to be accessible via mobile device or laptop. The Unified Technology Solution network supports a mix of Desktop, Softphones, Teams, SIP and PRI interfaces. All technologies are managed by InfoNetworks' experienced Technical Support and Network Engineering teams and are monitored 24 hours a day, seven days a week by the watchful eye of CyberSecure(SM), an advanced network security software capable of locking down up to 500,000 endpoints.

"Our Unified Technology Solution is a four-pronged approach," said Bruce Hakimi, Senior Executive at InfoNetworks. "By delivering Managed IT, Network Security, Voice and Data under one source, we can maximize the efficiency and productivity of any organization." He further explained: "By being able to oversee all network elements from the data connection to internal Local or Cloud based Network, InfoNetworks has the advantage of acting and resolving issues quickly without having to wait for other vendors."


Although some data carriers may offer a cloud infrastructure, it is not a true Managed IT service. Their support is mostly limited to their own equipment and servers and does not cover software applications, internal equipment such as PCs, laptops, printers, scanners and WiFi routers, or internal network security. If a printer is not working, a server is down or a laptop is hacked, their help desk will not assist. InfoNetworks' Unified Technology Solution offers full LAN support, giving businesses the advantages of having an IT team at their fingertips without the overhead cost.

"It is like having an in-house IT Department that manages and maintains your entire network, from your voice services to your laptops," said Hakimi. "Just think about it: how many companies can direct you to one support number for every type of trouble on your platform from your internet being down to an issue with your Network Security?"

View source version on businesswire.com: https://www.businesswire.com/news/home/20211026005480/en/

Contacts

Francesca Avincola - InfoNetworks Media Relations
francesca.avincola@infonetworks.com
310-203-9900 Ext. 103

Read more:
"Unified Technology Solution" - An InfoNetworks Service that Delivers Managed IT & Network Security Plus Voice and Internet Solutions -...


Salesforce to ramp up hiring by 1,500 by end of this year – Mint

BENGALURU: India is a critical market for cloud-based software company Salesforce.com Inc., having grown to be the largest centre outside of its headquarters in San Francisco. The India unit is currently hiring to address growing demand from its customers, especially small and medium businesses (SMBs). In an interview, Arundhati Bhattacharya, chairperson and CEO of Salesforce India, talks about the growth in the India business, the SMB opportunity and drivers for cloud solutions. Edited excerpts:

How has the India business grown and what are your hiring plans?

We have doubled our headcount in the last 18 months in India alone. When I came in, the headcount was around 2,500 and it's around 5,000 now. We plan to exit the fiscal year at about 6,500 people (Salesforce's fiscal year ends on 31 January). So, we are doing a lot of recruitment currently. We may not double exactly, but then our plans are pretty large. And we will definitely be growing quite fast, even the next year. Our people are not just doing sales and distribution, but there is also a very large team that supports our global operations. Like most other large multinationals, we have global innovation centres in India comprising engineering, R&D, support and all of the services. We are the largest centre for Salesforce outside of the US.

For which skills and roles are you hiring?

Almost 90% of the people we hire are for technology roles and they are basically engineers. There are, of course, people in other areas like sales and finance, but most of the roles are very technology-oriented. We look at people who are Salesforce certified. Salesforce has a very nice gamified platform that is already available in the public domain and is free as well. It's called Trailhead, and you can actually get on to Trailhead and certify yourself. But even if someone is not Salesforce certified, it does not prevent them from coming in as long as they can do Java programming and things like that. We are also recruiting in the area of HR because if you are doing so much recruitment, you also need recruiters and employee success business partners. The roles will be across all areas of a rapidly growing organization ranging from general administration to technology to sales.

What is the opportunity from SMBs in India?

SMB is probably the largest opportunity for Salesforce. When Salesforce was initially set up 23 years back, it started with the SMB segment. The enterprise segment only came about 10 years later. The idea was to solve the SMB issues. For instance, in India, most SMBs are not capital-rich, so they do not want to lock their capital to get the best of the systems. We offer subscription-based solutions so that they are not required to set up their own hardware. And its a monthly subscription, which also ensures that we stay on our toes to give them the best service possible in order to ensure that those subscriptions get renewed year after year. Three quarters of our customer base constitutes SMBs.

A recent Salesforce-IDC report stated that cloud-related technologies will account for 27% of all the IT spending in India. What's driving the demand for cloud?

India is a capital-poor country and on-premise systems can be very costly because you are not only having to put down your servers in one place but you will actually be needing them in three pieces, because you need the business continuity plan at a near site as well as disaster recovery at a far site. So, getting the servers itself is a long process. Whereas, if you look at cloud applications and you do the same job with a cloud service provider, the cloud service provider can provision you in a matter of hours. What could have taken days and a lot of money can actually be got done in a matter of hours. It's not only a question of convenience, but also about cost as with cloud, you are paying as you go.

When you are trying to be aligned to your customer, you need a number of analytics and artificial intelligence tools. The larger the data set, the better will be the outcome. To have such kind of elasticity in an on-premise system would be very costly.


More here:
Salesforce to ramp up hiring by 1,500 by end of this year - Mint
