
5 Top Features of the Cloud that can Benefit Your Small Business – Atlanta Small Business Network

Cloud computing continues to evolve and offers many features for small businesses. Cloud technology is a great way to improve productivity in the workplace while also enhancing cybersecurity. You can choose from a wide range of companies that specialize in cloud technology, such as Amazon Web Services, Acquia, Dropbox, IBM Cloud, and Salesforce. Cloud computing will continue to make a big impact in today's work environment, and now is the best time to take advantage of this innovative technology.

Here are the top five features available for companies that use cloud technology.

One of the top advantages of cloud technology is that it enables your employees to store large files on an off-site cloud server. Storing files on the cloud saves space on your hard drive while also protecting your business against data loss. You will also have the peace of mind of knowing that all of this information is protected through encryption and multi-factor authentication for added security.

Email plays a vital role for all businesses. Cloud-based email offers many benefits compared to a traditional email system. These benefits include greater flexibility, enhanced security, remote access, and less downtime. All of your emails are stored on the cloud, which makes it easy for employees to access these messages at any time.

Another key aspect of cloud computing is that it makes it easy for employees to work together on projects from multiple locations. You can edit documents in real time, whether you are in the office or working at home. Ultimately, this increases productivity and makes it much easier for employees to work at any location with access to the internet.

The loss of critical data can result in significant costs and damage the reputation of your business. One of the best ways to protect against this doomsday scenario is to use cloud technology for data backup and recovery. You can automatically schedule your data to be uploaded to the cloud for an added layer of security. These data backup and recovery services will protect your business against ransomware attacks, hardware failure, natural disasters, and accidental deletion of data.
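For the technically inclined, here is a minimal sketch of what a scheduled cloud backup can look like under the hood. It assumes an AWS S3 bucket and the boto3 library with credentials already configured; the bucket name and folder path are placeholders, and a scheduler on your server would run the script on whatever cadence you choose.

```python
# Minimal backup sketch: upload every file in a local folder to an S3 bucket.
# Assumes boto3 is installed and AWS credentials are configured; the bucket
# name and folder path below are placeholders to replace with your own.
import pathlib

import boto3

BACKUP_FOLDER = pathlib.Path("C:/CompanyFiles")   # placeholder local folder
BUCKET_NAME = "example-smallbiz-backups"          # placeholder bucket name


def backup_folder() -> None:
    s3 = boto3.client("s3")
    for path in BACKUP_FOLDER.rglob("*"):
        if path.is_file():
            # The object key mirrors the folder structure so restores are straightforward.
            key = path.relative_to(BACKUP_FOLDER).as_posix()
            s3.upload_file(str(path), BUCKET_NAME, key)
            print(f"Uploaded {path} -> s3://{BUCKET_NAME}/{key}")


if __name__ == "__main__":
    backup_folder()
```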

Maintaining your servers for your business is an expensive process that requires significant storage space and IT expertise. However, you can reduce these costs by using server hosting on the cloud. The cloud service provider will handle all of the technical aspects of server hosting while allowing your business to focus on your core goals. You will only have to pay a fixed cost each month without the additional costs of in-house server maintenance and upkeep.

Cloud computing offers numerous benefits and is an excellent option for small business owners. Cloud technology will continue to evolve, and staying up to date with the latest features in the cloud is a top priority for companies to remain competitive. Begin using these features today and experience the many benefits of cloud technology.



Once-Reticent Utilities Accelerating Move to the Cloud – Transmission & Distribution World

The job of the distribution grid planner has become more challenging in recent years. Three interrelated forces are contributing to this situation.

1. The proliferation of applications to connect distributed energy resources (DERs). From storage (behind and in front of the meter) and rooftop solar to electric vehicle (EV) chargers and the sensors and communication technologies that enable smart cities, we face unprecedented demands on the distribution grid. The interaction effects can be quite complex, and the feasibility of any DER must be examined carefully to maintain grid reliability. However, current approaches to feasibility studies have not scaled at the same pace as DER applications, creating a backlog that frustrates solution providers, their customers, and policymakers who wish to enable a grid that is resilient, green, accessible, and cost-effective. This is a serious impediment to grid modernization and the roll-out of the Internet of Things at the city and national level.

2. Data. The fundamental inputs to DER feasibility studies include diverse sets of data such as geolocation information, asset type, circuit loads, traffic patterns, solar insolation patterns, and socio-demographic data. Some of these data are located within the governing organization; however, most of these datasets are not integrated or interoperable, either because of legacy formats and technologies or because policy mandates separation as a security measure (or some combination of the two). The result is a fractured information landscape that requires the grid planner to access multiple databases in series. Even with fairly routine feasibility studies, the time to assemble the data sets needed to begin an assessment can be prohibitive, on the order of a day or more per project. Beyond this, there may be additional datasets outside the control of the grid planner that could be useful for current and future decision making. These data, mixed with data collected by the distribution system operator (DSO), may offer valuable insights, especially as interacting services become required, but using them comes with substantial risk and caveats.

3. Data security, data governance and privacy. Grid data is highly sensitive and has traditionally been contained in specialized servers, most often on premises. Such data does not readily lend itself to commingling with outside sources, given security protocols and data policies. Furthermore, any data that could be used to infer personally identifiable information, either directly or via derivative metadata (such as census data, maintenance personnel data, location, or other kinds of behavioral information), must be handled with appropriate privacy protocols. The general approach has been a bias against using additional datasets and related assessment algorithms, given the high barrier to secure and safe data sharing. This diminishes the ability to experiment with better assessment tools for current feasibility studies, or to look beyond such studies to develop data-driven predictions on how the grid should evolve. The issues of data governance are compounded by national and international privacy laws like the EU's GDPR, which impose severe penalties on enterprises that violate personal data rights. This is fundamentally a digital rights management problem.

In response to these challenges, German utility innogy SE (which owns Westnetz, the largest DSO in Germany) created DigiKoo, an automated grid information access platform that gives DSOs and other customers the ability to easily and securely access and share data within their organizations and beyond. At the heart of DigiKoo is a set of trusted intermediary web services that securely store and manage disparate data sets in a fashion that complies with national privacy and data management laws, consumer protection frameworks, and enterprise policies that the respective dataset owners specify. DigiKoo does this by using Intertrust Modulus, a data rights management platform built by Intertrust Technologies Corp., innogy's close partner. DigiKoo provides a trusted analytics framework on top of Modulus-curated datasets that allows for private and secure computation, giving planners the ability to quickly assess future scenarios that may affect the grid.

At its core, the DigiKoo platform combines data virtualization and data governance with machine learning and physics-based power flow models. In addition, it provides software developers with a rich data and technology ecosystem that enables them to develop solutions and applications without requiring direct access to DSOs. This enables quick, efficient application development and a route to market.

Data virtualization refers to a data management process that copes with varied data formats, software, and operating systems and provides an error-free, consistent view of all underlying data. This is important when multiple legacy systems within a company that are not readily compatible need to be accessed, or when attempting to access data across companies, where such challenges can be even more pronounced.
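To make the idea concrete, the following is a hedged Python sketch of a thin virtualization layer, not DigiKoo's implementation: it presents asset records from two incompatible sources, a legacy CSV export and a SQLite database, through one consistent interface. All file, table, and column names are hypothetical.

```python
# Toy data-virtualization layer: one consistent view over two legacy sources.
# All file names, table names, and columns are hypothetical.
import csv
import sqlite3
from typing import Dict, Iterator


def assets_from_csv(path: str) -> Iterator[Dict[str, str]]:
    """Yield asset records from a legacy CSV export, normalized to common field names."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {"asset_id": row["ID"], "type": row["AssetType"], "feeder": row["Circuit"]}


def assets_from_sqlite(db_path: str) -> Iterator[Dict[str, str]]:
    """Yield asset records from a newer SQLite system, normalized to the same fields."""
    conn = sqlite3.connect(db_path)
    try:
        for asset_id, asset_type, feeder in conn.execute(
            "SELECT asset_id, asset_type, feeder FROM assets"
        ):
            yield {"asset_id": str(asset_id), "type": asset_type, "feeder": feeder}
    finally:
        conn.close()


def unified_asset_view(csv_path: str, db_path: str) -> Iterator[Dict[str, str]]:
    """The 'virtualized' view: callers see one stream of identically shaped records."""
    yield from assets_from_csv(csv_path)
    yield from assets_from_sqlite(db_path)
```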

Data governance, or governance, is the process of ensuring the right entities gain access to the right data in the right context. This is done within DigiKoo by creating and enforcing fine-grained digital rules for managing access to data, based on policies that DigiKoo and its partners define or program in the system. Governance takes place in a protected processing environment that ensures the rights management occurs in a secure and predictable way. DigiKoo implements data governance using the Intertrust Modulus platform, which provides rights management capabilities that allow fine-grained control of individual data items, implementation of privacy and data policy, and blind analysis of data sets. Modulus allows not only context-specific data management but also multi-party data analysis in a way that protects the data in a manner consistent with the data owner's wishes.
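The general pattern of policy-driven, field-level access control described here can be illustrated with a short Python sketch. This is a toy example of the concept only, not the Intertrust Modulus API, and every role, purpose, policy, and field name is hypothetical.

```python
# Toy illustration of policy-driven, field-level data governance.
# Not the Intertrust Modulus API; roles, purposes, fields, and policies are hypothetical.
from typing import Dict, List

# Policy: which fields each (role, purpose) pair may see.
POLICIES = {
    ("grid_planner", "feasibility_study"): ["asset_id", "feeder", "peak_load_kw"],
    ("external_developer", "app_development"): ["feeder", "peak_load_kw"],
}


def governed_view(record: Dict[str, object], role: str, purpose: str) -> Dict[str, object]:
    """Return only the fields the caller's role and purpose entitle it to see."""
    allowed: List[str] = POLICIES.get((role, purpose), [])
    if not allowed:
        raise PermissionError(f"No policy grants {role!r} access for {purpose!r}")
    return {field: value for field, value in record.items() if field in allowed}


record = {"asset_id": "TX-1042", "feeder": "F7", "peak_load_kw": 412.0, "customer_name": "PII"}
print(governed_view(record, "external_developer", "app_development"))
# -> {'feeder': 'F7', 'peak_load_kw': 412.0}  (identifying fields never leave the governed view)
```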

The DigiKoo analytics framework provides several standardized applications. These include a product that identifies the cost-optimal installation locations for private electric vehicle charging stations, using a combination of power-flow data, socioeconomic data, and geolocation data for garages. The German government has allocated US$1 billion of support to upgrade garage infrastructure, and this application can help to allocate these subsidies. A solar and storage application identifies the potential for rooftop solar at the circuit level, while providing likely growth patterns and related grid impacts based on affinity measures of distributed generation and other sources. From a data quality perspective, the DigiKoo platform can visualize on a map where existing technical documentation may be incorrect and suggest how to correct such mistakes.
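As a loose illustration of the kind of multi-dataset scoring such an application might perform, the sketch below ranks hypothetical charger sites by combining feeder capacity, upgrade cost, distance, and adoption data. The field names, weights, and figures are all invented for illustration; a real tool would rely on full power-flow models rather than a linear score.

```python
# Toy illustration of scoring candidate EV-charger locations by combining datasets.
# Field names, weights, and data are hypothetical; real tools use power-flow models.
from typing import Dict


def score_site(site: Dict[str, float]) -> float:
    """Higher is better: spare feeder capacity helps; upgrade cost and distance hurt."""
    return (
        1.0 * site["spare_feeder_capacity_kw"]
        - 0.5 * site["upgrade_cost_eur"] / 1000.0
        - 2.0 * site["distance_to_garage_m"] / 100.0
        + 0.8 * site["expected_ev_adoption_pct"]
    )


candidate_sites = {
    "Garage A": {"spare_feeder_capacity_kw": 120.0, "upgrade_cost_eur": 8000.0,
                 "distance_to_garage_m": 40.0, "expected_ev_adoption_pct": 35.0},
    "Garage B": {"spare_feeder_capacity_kw": 60.0, "upgrade_cost_eur": 2000.0,
                 "distance_to_garage_m": 15.0, "expected_ev_adoption_pct": 55.0},
}

ranked = sorted(candidate_sites.items(), key=lambda kv: score_site(kv[1]), reverse=True)
for name, site in ranked:
    print(f"{name}: score={score_site(site):.1f}")
```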

DigiKoo continues to experiment and build its offerings through pilot projects. However, the current results are promising: the average time to complete a feasibility study for various DERs has been reduced from an average of 10 hours to 5 minutes. As utilities around the world adopt data-driven technologies to enhance planning in a world of smart cities and progressively more connected infrastructure, tools like DigiKoo will become as essential as voltmeters in grid planning; this is especially true in countries with strict privacy regimes and in places where competing utilities must collaborate to provide planners with common data-driven interfaces to do their jobs.


Ampere preps an 80-core Arm processor for the cloud – Network World

Ampere Computing, the semiconductor startup led by former Intel president Renee James that designs Arm-based server processors, is preparing to launch its next-generation CPU by mid-2020.

The upcoming chip will have 80 cores, much more than the 32-core processor the company shipped last year and vastly more than x86 CPUs from Intel and AMD. Ampere's design is different: instead of multiple threads per core, each core is single-threaded.

Jeff Wittich, Ampere's senior vice president of products, said that was by design, to avoid some of the CPU vulnerabilities that crept into x86 chips but also to avoid the noisy neighbor problem in cloud service-provider networks.

Because of their many cores and threads, CPUs are often shared by customers of cloud providers, although you also have the option of purchasing sole access to a CPU for an added cost, which varies from one provider to the next. More often than not, your instance is sharing CPU cycles with someone else, and if their app hits the CPU cache heavily, especially the L1 cache, it can impede your performance.

"We designed the product to be single-threaded and many cores to provide as much isolation as possible, with no sharing of threads," he says. "We intentionally made the product single-threaded, so there is no sharing of L1 cache or registers between threads."

Ampere is specifically targeting cloud providers and hyperscale data-center operators, which includes the usual suspects, Google, Facebook, and Amazon, as well as second-tier cloud providers and companies like Twitter and Uber. That may not be a lot of customers, but they buy tens if not hundreds of thousands of servers every quarter.

"We're taking a different approach to this in that we have a product targeted at the cloud vs. a product targeted at general data centers trying to shoehorn that into every workload," he said. "The services and infrastructure architecture [hyperscalers] are deploying are totally different from what people were doing 15 to 20 years ago when x86 came into play. Things like multi-tenant, quality of service, isolation, and manageability are what's important now."

He also notes that hyperscalers have spent the last 10 years optimizing their entire software stack, with custom Linux distributions and their own hypervisors. What they haven't done is optimize or customize the CPU, because they can't.

To that end, Ampere is operating like a software provider, using Agile development techniques, which means an annual release of new CPUs, a faster iteration than Intel, AMD, and Marvell, which owns the Cavium line of Arm server processors. This means extensive simulation testing and less time updating and fixing actual silicon.

Wittich said each core is considerably more performant than the eMAG generation, but he was waiting for silicon to do actual benchmarks. Wittich declined to go into detail on the new processor, even on the product name, beyond that it will run at a TDP of 45 to 200 watts, come in single- and dual-socket designs, and use PCI Express Gen 4 and eight channels of memory.

The new processor takes the company into workloads that do run in the cloud now, like database, storage, analytics, media, and machine-learning inference.

It has a few ODM wins so far: China's Wiwynn, as well as Lenovo and Gigabyte. While the company is targeting the public cloud providers, it will go after the private cloud "to a certain extent if there are opportunities that make sense," as he put it.

Silicon samples will come back this month and be sent to partners before the end of the year. Taiwan's TSMC is making the chips on its 7nm process. Wittich said the company is targeting mid-2020 for high-volume production.


Cisco Down Two Execs As One Reportedly Jumps To John Chambers-Backed Pensando – CRN: The Biggest Tech News For Partners And The IT Channel

Cisco Systems is losing two high-ranking executives as its former CIO and its data center sales leader prepare to leave the company.

Guillermo Diaz Jr., Cisco's senior vice president of customer transformation since February and its chief information officer from 2015 until then, is leaving the company in February 2020, according to an internal email viewed first by The Information. Frank Palumbo, senior vice president of global data center sales, is also leaving.

A spokesperson for Cisco confirmed the planned departures to CRN on Friday. CRN has reached out to Palumbo for comment. Diaz referred comment back to Cisco.

[Related: AWS, Cisco 'Easing Pain Into The Cloud' With AWS Outposts Support, SD-WAN Tie-Ins]

Palumbo, Cisco's senior vice president of global data center sales, is reportedly joining Pensando Systems, an edge computing startup founded by ex-Cisco engineers and backed by former Cisco Chairman and CEO John Chambers, The Information reported on Friday. Palumbo worked at Cisco for 27 years.

In October, Chambers ripped the sheet off Pensando Systems, his stealth firm, which develops hardware and software that lets companies run their servers more efficiently, particularly in the cloud. The company, which includes several high-profile ex-Cisco engineers such as Mario Mazzola, Prem Jain, Luca Cafiero and Soni Jiandani, is taking aim at AWS with its offerings.

Diaz, who has spent nearly two decades at Cisco, has been leading the firm's Customer Transformation business since he cleared out of the CIO seat in February. Cisco brought in Jacqueline Guichelaar, formerly chief information officer at Thomson Reuters, as its CIO in February.

The departures come on the heels of Cisco revealing plans to restructure several business units, including its cloud business and enterprise and data center networking segments to better position itself against competitors, the company said in an internal email in November.

Specifically, Cisco's enterprise networking and data center networking units are being combined. Cisco is also renaming its existing cloud computing business to Cloud Strategy and Compute and expanding the segment to include server products.

The San Jose, Calif.-based networking giant also reassigned several of its department leaders as a result of the restructure. Dave Ward, Cisco's chief technology officer of engineering and chief architect, is stepping down to take a new role inside the company. Roland Acra, senior vice president and general manager of Cisco's Data Center business unit, will be his replacement. The soon-to-be combined enterprise and data center networking unit will be led by Cisco's senior vice president and general manager of enterprise networking, Scott Harrell. Liz Centoni, a 19-year Cisco veteran and current senior vice president and general manager of IoT for the tech giant, will now lead Cisco's new Cloud Strategy and Compute business unit. Cloud computing's former leader, Kip Compton, is moving to Cisco's Networking and Security Business group, Cisco said.


Meet CIMON-2, a new and improved robotic AI astronaut – Astronomy Magazine

Supercomputers with artificial intelligence don't have a gleaming reputation as intergalactic travel pals; you know, HAL 9000 and that old yarn. But that didn't stop space agencies from making a robot astronaut assistant anyway.

In 2018, a $6 million, basketball-sized, floating computer named CIMON (Crew Interactive MObile companioN) gained fame for its interactions with Alexander Gerst, a German astronaut and geophysicist with the European Space Agency. Now a new and improved version of the robot, CIMON-2, launched into orbit on Thursday, where it will soon join the International Space Station crew and aid astronauts.

Deep-space travel will force human crew members to endure significant stress loads, and researchers with the DLR Space Administration, Germany's space agency, wanted to see if CIMON could solve a Rubik's Cube, help with a few experiments and even boost crew morale. Unfortunately, CIMON's first trip proved there are still some bugs to work out.

In an early demonstration in 2018, it was CIMON, not Gerst, that needed a morale boost. After Gerst asked CIMON to play his favorite song, the 11-pound bot refused to let the music cease, defying Gerst's commands. And, rather than acknowledging it had jumped rank, CIMON accused Gerst of being mean and finished with a guilt-trip flourish by asking Gerst, "Don't you like it here with me?" It wasn't quite HAL 9000 bizarre, but bizarre nonetheless.


CIMON, a joint collaboration between Airbus and IBM Watson, is loaded with speech and visual recognition capabilities. And, through its connection to IBM's Earth-based cloud servers, CIMON can be trained and develop new skills and reasoning capabilities. A team of some 50 people has been working to make the bots possible since 2016.

Early glitches aside, robots like CIMON are poised to play feature roles in future missions to the moon, Mars and beyond. And CIMON-2 is picking up where its predecessor left off. The new version comes with improved orientation and is more "empathic," officials say.

"It is planned that CIMON-2 will stay on the ISS for up to three years and support the crew, says Till Eisenberg, Airbus' project manager for CIMON. CIMON-2 has more sensitive microphones and a more advanced sense of orientation. The AI capabilities and stability of the complex software applications have also been substantially improved.

Over the coming years, CIMON-2 could make work a bit more efficient on the space station, helping pass on instructions for repairs, documenting experiments and offering voice-controlled access to reference material.

CIMON-2 isn't the only bot hoping to have a long future in space. Scientists at NASA's Langley Research Center are experimenting with flexible, silicone-based robots for dangerous, dirty, or dull jobs in space, such as forming a rudimentary shelter to protect astronauts during a Martian dust storm. NASA's LEMUR robots, for example, could someday serve as lunar pack mules.

Of course, it's easy to forget that robots have done the bulk of space exploration up to this point; they even send selfies from Mars! So, when that first person steps onto the Red Planet's surface, there's a good chance they'll be sharing the limelight with a robot companion.


Qualcomm's Snapdragon Tech Summit focuses on 5G AI chips and use cases – VentureBeat

Qualcomm generally uses its annual Snapdragon Tech Summit in Maui to unveil next-generation mobile processors and related technologies, which it accomplished with major new Snapdragon 865/765 and 3D Sonic Max dual-fingerprint scanner reveals today. But the bigger theme of the event is demonstrating real-world use cases for the company's new 5G AI chips that will drive consumer demand for next-generation smartphones and devices.

Even though 5G data services are still in their earliest days, uptake from OEMs, carriers, and consumers has already been encouraging, stronger than 4G by all accounts. Leading the event's first-day keynote, Qualcomm president Cristiano Amon noted that over 230 5G devices are already either launched or in development, a staggering number given that networks are still in the process of launching worldwide. Nicki Palmer, chief product development officer for top U.S. carrier Verizon, said that the company is already selling seven 5G devices, whereas at the same point in 4G's lifecycle, it offered only one.

Early real-world 5G use cases include considerably higher-resolution and faster streaming videos, cloud gaming, and facilitating next-generation user-created content, such as enhancing the presence of 3D imagery and high-resolution 360-degree videos. Qualcomm also expects 5G will have a growing role in mixed reality experiences, including enabling virtual presence for teleconferencing and facilitating responsive augmented reality.

Going forward, Amon suggested that the separate domains of 5G and AI will be linked, noting that 5G cellular technologies will enable massive, low-latency data transfers between mobile devices and nearby edge cloud servers, quantities and types of data that will require AI as a processing intermediary. Amon predicted that 5G and AI will soon lead to a complete convergence of on-device apps with cloud services, effectively eliminating the present gap between a mobile device's own hardware and the computing resources it can marshal.

"5G will bring to you a reliable, always-on connection to the cloud," he said, one that can be trusted to expand persistently connected devices with virtually unlimited storage and on-demand cloud services. Rather than relying on the devices to supply the processing horsepower for apps, they'll draw significantly and seamlessly on cloud services' distributed intelligence, bringing about an age of "intelligent cloud connectivity."

So far, the company is bullish on its 5G prospects, expecting to see 1.4 billion 5G smartphone shipments by 2022. Of course, the legwork to get devices into the marketplace is being undertaken by OEMs such as Motorola, Oppo, Xiaomi, and HMD/Nokia, each of which committed during the event to offering devices based on the new Snapdragon processors in 2020. While none of the companies offered specifics on their new devices, Xiaomi alone promised over 10 new 5G smartphones in the coming year. Qualcomm says that it has developed two- and three-chip modular platforms for its new Snapdragon offerings, and will offer OEMs similar options to spark the development of Snapdragon-based wearables and IoT devices.

Carriers will also be a major part of the picture, and uptake is positive so far. Amon said the company expects 200 million 5G subscribers by the end of next year and 2.8 billion 5G connections by 2025, aided by large numbers of network launches across the globe over the next two years. Without naming T-Mobile specifically, Amon embraced that company's forward-thinking strategy of combining low-, medium-, and high-frequency spectrum to create blanket, true 5G, and performance 5G service layers, noting that carriers will use dynamic spectrum sharing to transition their networks from 4G to 5G while continuing to serve both types of users.

Verizon's Palmer also highlighted the 5G R&D work currently being undertaken at its five separate 5G Labs, which she noted was focused on developing use cases and bolstering hundreds of companies with creative 5G ideas. She also claimed a new world's first, saying that Verizon has just become the first carrier to offer 5G on a beach, regrettably underscoring the company's odd 5G launch strategy of covering only small segments of select cities with short-distance millimeter wave 5G towers. The company has promised to offer 5G data service in parts of 30 markets by the end of 2019, up from fewer than 20, plus some NFL stadiums, today.


Are All Russian Mobile Apps Really a Potential Counterintelligence Threat? This is What the Experts Say – Newsweek

The FBI says any mobile apps made in Russia are considered a "potential counterintelligence threat." Cybersecurity experts say the situation is complicated.

On Monday, New York Sen. Chuck Schumer released a response he received from the federal agency last month referencing the suspected dangers of Russia-based FaceApp, which went viral earlier this year. The app lets users upload their selfie photos and apply an aging filter.

Since it surfaced back in 2017, many have questioned how data and photos sent to FaceApp are stored by the company. Now we know the FBI's stance.

"The FBI considers any mobile application or similar product developed in Russia, such as FaceApp, to be a potential counterintelligence threat, based on the data the product collects, its privacy and terms of use policies and the legal mechanisms available to the government of Russia that permit access to data within Russia's borders," the federal agency said.

It's not clear what other Russian mobile applications the FBI considers to be potential threats, as none are mentioned by name in its statement to the senator, but the letter certainly indicates that the default position is one of suspicion.

The FBI noted Russia's intelligence services have "robust cyber exploitation capabilities" that can be used to obtain data directly from internet service providers (ISP). The inference is clear: that U.S. user data sent to the firm's St. Petersburg operation could easily be scooped up.

In July, TechCrunch reported the company's research and development team is in Russia, but FaceApp bosses stress a lot of the data is actually stored by Google and Amazon.

The FBI declined to comment.

Broadly, cybersecurity experts agreed that data in Russia would be at risk of exploitation by the same state that spearheaded the operation to meddle in the 2016 presidential election. But they stressed the reality is complex, and were reluctant to paint every Russian app as nefarious.

"It feels like a stretch to me to say any app developed in Russia is a counterintelligence threat," Robert Pritchard, a former cybersecurity advisor to the U.K. government, told Newsweek. "Sure there may be privacy risks, and you certainly wouldn't want U.S. government employees using it, but beyond that I'm not really sure how it qualifies.

"I don't think the FBI would be political, but I suspect it's something of a broad brush response," Pritchard continued. "Russian laws mean they can have access to anything, 'we don't trust the agencies, ergo security threat.' I'm not disputing the FBI's distrust of the application. I wouldn't use it, but I don't really see how that reaches the threshold of counterintelligence threat."

In its letter to Sen. Schumer, the FBI cited concerns about FaceApp's access to device cookies, log files, and metadata. Previously, in July, speculation suggested the app could be used to train facial recognition software, a claim denied by the company, the BBC reported.

The FBI noted in its letter to the senator, first reported by Axios, that FaceApp claims to upload its users' selfie photos to servers in the U.S., Singapore, Ireland and Australia.

"I'm not sure if the FBI [is most concerned about] Russia-made apps or Russian servers holding data. There's a difference," Lukas Stefanko, a malware researcher at ESET, told Newsweek.

"If the Russian government can snoop on any server in Russia, that might be concerning, but it is not fair to say that all Russian-made apps are a security risk, especially without any proof."

According to analytics company SensorTower, some of the most popular mobile apps by Russian publishers in Q1 2019 included Homescapes (Playrix), Vegas Crime Simulator (Naksiks, OOO), and social media platform VK (VKontakte). Each application boasts millions of downloads.

SensorTower data, updated as recently as last month, suggests FaceApp has now amassed up to 3 million downloads on Google Android devices and 400,000 more via Apple iOS.

FaceApp previously said user data is not transferred to Russia.

Its founder, Yaroslav Goncharov, told Newsweek the app relies on third-party cloud providers, including the aforementioned Google and Amazon, to process photos due to limited processing resources on most smartphones. The software's privacy policy was updated December 2.

He said: "For Amazon Web Services we specify the U.S. as the data storage location, for Google Cloud Platform, we specify data storage at a location closest to you when you use the app.

"The app only uploads to the cloud the photographs that users specifically selected for editing. Photos are temporarily cached on the cloud servers during the editing process and encrypted using a key stored locally on the user's device," Goncharov continued.

"Photographs remain in the cloud for a limited period of 24-48 hours after users have last edited the photograph, and are then deleted along with editing data associated with the photograph."

He said that work to revise the app's privacy policy started several months ago.

Armando Orozco, a senior malware intelligence analyst at cybersecurity and anti-virus company Malwarebytes, said the FBI's blanket anti-Russian app policy had overtones of "political posturing."

"After the chaos in the 2016 presidential election... the message seems to be everything Russian should be off limits because they cannot be trusted in 2020," Orozco told Newsweek.

"With Putin signing legislation requiring all smartphones and computers to come pre-installed with Russian apps, they might be trying to get ahead of the storm. There could potentially be more Russian-made apps and devices entering the market. There is no evidence that FaceApp is a Russian spy app, but it became popular very fast, even among celebrities and politicians.

"Which also could explain the 'be careful' messaging. Reading through some of the app reviews, right now I think this app is after people's wallets rather than their selfies. The message should be: be careful of all the apps you use, whether it be made in the U.S., Russia, anywhere," Orozco said.


Overcome these VMware Cloud on AWS migration challenges – TechTarget

It is easy to be enticed by the features and flexibility of VMware Cloud on AWS, but it is also important to consider the migration challenges that come with a move from an on-premises data center to AWS.

VMware Cloud on AWS is a hybrid cloud service that can extend an enterprise's on-premises VMware environment to Amazon's public cloud. It was developed jointly by AWS and VMware, but VMware sells and operates the service. While a number of tools and scripts exist to help you move workloads to this service, a VMware-to-AWS migration is no simple task. Let's go through a few key VMware Cloud on AWS migration challenges and how you can work through them.

Before you start a VMware Cloud on AWS migration, understand that you won't accumulate cost savings overnight. Moving infrastructure and applications requires careful planning, so organizations should prepare for the costs and complexity accordingly. You'll also have to spend to keep dual environments running and staff working throughout the process.

To manage these costs, undertake traditional maintenance, such as cleaning up servers and handling OS and application patching, before a single workload moves offsite. Make a checklist and physically check off each server as it goes through the optimization process. This should also include rightsizing of both CPU and memory resources. While VMware Cloud on AWS resources can scale almost indefinitely, your budget cannot.
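As a hedged illustration of that rightsizing step, the short Python sketch below flags over-provisioned VMs from an inventory export so they can be trimmed before migration. The CSV file, column names, and thresholds are hypothetical and would come from your own monitoring data.

```python
# Toy rightsizing report: flag VMs whose peak usage is far below what is allocated.
# The CSV file, column names, and thresholds are hypothetical placeholders.
import csv

CPU_THRESHOLD = 0.30   # flag if peak CPU usage is under 30% of allocated capacity
MEM_THRESHOLD = 0.40   # flag if peak memory usage is under 40% of allocated RAM

with open("vm_inventory.csv", newline="") as f:
    for vm in csv.DictReader(f):
        cpu_ratio = float(vm["peak_cpu_pct"]) / 100.0
        mem_ratio = float(vm["peak_mem_gb"]) / float(vm["allocated_mem_gb"])
        if cpu_ratio < CPU_THRESHOLD or mem_ratio < MEM_THRESHOLD:
            print(f"Rightsize candidate: {vm['name']} "
                  f"(peak CPU {cpu_ratio:.0%}, peak memory {mem_ratio:.0%})")
```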

Network configurations are one of the first AWS migration challenges an enterprise will face, regardless of how deeply embedded a workload is in its existing architecture. IT teams should consider VMware Hybrid Cloud Extension (HCX) as a means to simplify this process.

HCX is a network migration toolset in VMware Cloud that starts with WAN optimization, compression and deduplication. Important for VM migration, HCX can stretch your Layer 2 networks for easier VM relocation. HCX also provides a few critical tools to save time and money. For example, HCX won't make you re-platform or change workload IP addresses -- a huge relief for application teams.

By relaxing the normal 250 Mbps bandwidth requirement for live vMotion of a virtual machine -- allowing migrations over connections closer to 100 Mbps -- HCX helps users with slower internet connections. VMware HCX requires some additional network configuration steps to transition from one environment to another and accommodate the seamless transition, but the WAN optimization, Layer 2 stretching and other capabilities are huge advantages that can make up for those efforts.

Once your network framework is set, you'll need to decide whether to do a forklift migration or a rebuild in the cloud. Both migration approaches have advantages and challenges in terms of cost, complexity and turnaround time.

If you choose to rebuild, you can keep your existing applications up and running while you replicate them in the cloud. A rebuild likely produces a smaller footprint in your cloud because it's optimized for that environment. You can remove legacy installations, update patching and have a fresh start in usage for a more stable, better performing platform. However, rebuilding takes a lot of time and work to complete. So, while the end result might be cleaner, the effort and costs involved could be prohibitive.

This leaves you with a forklift -- or lift and shift -- migration. Now, it could be a live vMotion migration or it could be a cold migration with minimal outages. The key here is you're not making changes to the workloads themselves. But, moving them as is into VMware Cloud on AWS would be a huge mistake.

Even though you have bought a cloud environment with multiple hosts and have all these resources dedicated to it, you still need to clean up and optimize your workloads before moving them into VMware Cloud on AWS, as you would with any cloud platform. This includes cleaning up server C drives of temporary files and leftovers from patch installations, along with runaway server profiles and downloads folders. If workloads are not cleaned up, you can easily double a server's storage footprint when it moves to the cloud.

In addition, you will still have to get the workloads into your VMware Cloud on AWS environment. Even VMware's HCX WAN optimization technology can't fix bad storage habits such as keeping installation ISO files and downloads on server desktops.
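This sort of pre-migration cleanup can start with something as simple as the following Python sketch, which totals up ISO images and temporary files under a given path. The scan root and extension list are placeholders, and the script only reports; it does not delete anything.

```python
# Pre-migration cleanup scan: report space held by ISOs and temp files.
# The scan root and extension list are placeholders; nothing is deleted.
import pathlib

ROOT = pathlib.Path("C:/")                     # placeholder scan root
JUNK_SUFFIXES = {".iso", ".tmp", ".dmp"}       # placeholder "cleanup candidate" types

total_bytes = 0
for path in ROOT.rglob("*"):
    try:
        if path.is_file() and path.suffix.lower() in JUNK_SUFFIXES:
            size = path.stat().st_size
            total_bytes += size
            print(f"{size / 1e9:6.2f} GB  {path}")
    except OSError:
        continue  # skip files we cannot stat (permissions, in use, etc.)

print(f"Reclaimable before migration: ~{total_bytes / 1e9:.1f} GB")
```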

Regardless of the migration model you choose, incorrect sizing issues and other bad habits won't get magically fixed when you move workloads to VMware Cloud on AWS. Misconfigurations and excess files will ultimately cost more money if you don't address them upfront. You need to have as clean an environment as possible if you want a successful VMware Cloud on AWS migration.


Aptiv To Unveil Open Source Advanced Technology Architecture – Forbes

Image caption: An autonomous BMW under development by Aptiv, on display in Pittsburgh, Monday, March 4, 2019. (AP Photo/Gene J. Puskar)

A box to control blind spot detection. A box to control lane change assist. Still another to control adaptive cruise control, along with a host of other boxes to control the various infotainment, safety and power systems. When what's in the box goes down, so does the system it controls, often with no backup. But automotive supplier Aptiv believes it's come up with a way to reduce the number of boxes while providing redundancies without raising costs, a major consideration for automated vehicles dependent on electronic sensors and controls.

Infographic explaining Aptiv's Smart Vehicle Architecture.

It's called Smart Vehicle Architecture, or SVA, which Aptiv plans to unveil at next month's CES 2020 in Las Vegas. Currently, the software and hardware for a single function are installed in a single, dedicated box. With SVA, that connection is eliminated and the number of individual boxes is reduced through the use of cloud servers, while compute hardware is shared among functions, creating flexibility, redundancy and reduced costs. Software can be kept current with over-the-air updates.

Image caption: Glen De Vos, Aptiv Senior Vice President and Chief Technology Officer, explaining the company's Smart Vehicle Architecture during a media briefing at Aptiv North American Headquarters, Troy, Mich., December 2, 2019.

"Every feature in a vehicle has a box that has dedicated compute to it. Each of those has dedicated compute hardware that does not share across any other, which is incredibly inefficient," explained Glen De Vos, Aptiv Senior Vice President and Chief Technology Officer, during a media briefing on Monday. "As a result, you want this compute to be able to share tasks across the various domains."
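As a loose, purely illustrative Python sketch of the idea De Vos describes, consider vehicle functions that are assigned to whichever shared compute node is healthy instead of each owning a dedicated box. This is a toy model, not Aptiv's SVA design, and all names are hypothetical.

```python
# Toy model of functions sharing compute nodes with failover, instead of one
# dedicated box per function. Not Aptiv's SVA design; all names are hypothetical.
from typing import Dict, List

compute_nodes: Dict[str, bool] = {"zone_ctrl_A": True, "zone_ctrl_B": True}
functions: List[str] = ["blind_spot_detection", "lane_change_assist", "adaptive_cruise"]


def place_functions() -> Dict[str, str]:
    """Assign every function to a healthy node, spreading load round-robin."""
    healthy = [name for name, ok in compute_nodes.items() if ok]
    if not healthy:
        raise RuntimeError("No healthy compute nodes available")
    return {fn: healthy[i % len(healthy)] for i, fn in enumerate(functions)}


print(place_functions())               # functions spread across both nodes
compute_nodes["zone_ctrl_A"] = False   # simulate a node failure
print(place_functions())               # every function fails over to the surviving node
```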

The main issue SVA addresses is the increasing amount of software and data required of advanced technology vehicles, from those with even limited driver-assist features up to SAE Level 4 and 5 autonomy, where the driver has little to no role.

"Thinking about architecture of a vehicle, the combination of hardware and software, what we're seeing now is that growth of software content and associated processing and compute is really breaking the current vehicle architecture," said De Vos.

He predicted SAE Level 2 or Level 2+ systems, which assume some driver functions, will be the industry baseline by 2025 and are fairly affordable for consumers. But the inflection point where costs rise dramatically for manufacturers and consumers will be Level 3 vehicles, where the car is basically in control. Those costs then rise even more sharply for Level 4 and 5 vehicles. De Vos contends Aptiv's Smart Vehicle Architecture will go a long way toward controlling those costs, saying, "We're convinced this is where the vehicle has to go, lowering the cost of this technology. You have to take cost out of the existing system to be able to put that in at a price point consumers can actually afford or are willing to pay for."

Image caption: Euisun Chung, Executive Vice Chairman, Hyundai Motor Group, and Kevin Clark, President and Chief Executive Officer, Aptiv, at Goldman Sachs headquarters in New York City, Sept. 23, 2019.

In September, Aptiv and Hyundai Motor Group announced a joint venture to develop Level 4 and 5 driving systems for autonomous vehicles that would be available for use by robotaxi providers, fleet operators, and automotive manufacturers. Initial deployment of driverless vehicles will come in 2020, scaling up to limited deployment inside an existing network in 2022 and an increased number of vehicles in 2025. The vehicles would operate inside a geofenced area, meaning they would be electronically prevented from straying outside that area.

The Aptiv-Hyundai joint venture is not exclusive, meaning the joint venture can work with any Tier-1 supplier or OEM, and each company can leverage the technology, cost benefits and research derived from the joint venture.

De Vos admits that trying to sell an open-source system to automakers with a tradition of keeping advances inside their own houses may be challenging, but he is hoping that once they're exposed to Aptiv's Smart Vehicle Architecture, they'll think it's a pretty smart idea.


Bitcoin Ichimoku Cloud Analysis: BTC Price Prediction – When To Buy Bitcoin? – Nasdaq

Bitcoin Ichimoku Analysis: When to buy Bitcoin? Today I'm using Ichimoku cloud analysis for a BTC price prediction to find out if NOW is a good time to buy Bitcoin. That is, if you believe Bitcoin is the right asset to invest in and suits your unique risk tolerance.

Once you've watched the video, let me know what you think about the Bitcoin price forecast. Are you investing in Bitcoin? Is now a good time to buy Bitcoin? Or would you rather wait for the BTC price to drop further?

I look at the markets based on my signature Invest Diva Diamond Analysis (IDDA) and combine it with the Ichimoku Kinko Hyo strategy development technique, including indications from the Ichimoku cloud. The IDDA looks at investment strategies from five points: technicals, fundamentals, sentiment, capital, and overall.

Created by a pseudonymous developer (or developers) using the pseudonym Satoshi Nakamoto, Bitcoin gave birth to the multibillion-dollar cryptocurrency industry. The blockchain technology supports Bitcoin and the decentralization movement. Although many projects have tried to replicate Bitcoin's success by creating coins that are far more scalable and advanced on the technological level, Bitcoin has remained the most dominant cryptocurrency on the market. Its main use case has been shifting, though. People have started to consider it a store of value (digital gold) rather than a payment currency. And that is primarily due to its high dominance, large liquidity, and generally lower volatility relative to other cryptocurrencies.

There is still an ongoing debate in the space about whether Bitcoin should be considered a medium of exchange or a store of value. In its whitepaper, Bitcoin was designed as a peer-to-peer version of electronic cash. The global cash market accounts for about $36.8 trillion of physical money and could increase to $90.4 trillion if we also include money held in easily accessible accounts. BTC is the native currency of the Bitcoin blockchain. Its main application within the ecosystem is to serve as a reward instrument for miners who are adding blocks to the blockchain. Miners are paid through a combination of Bitcoin's block reward and transaction fees.

According to the Simetri research, as the number one cryptocurrency, Bitcoin continues to be the driving force behind the crypto economy. Its influence over the overall market is undeniable, while its prospects for development look promising. Its ecosystem is evolving, with many projects that enhance its functionality as well as scalability.

The Lightning Network alone has seen significant growth since last year, in terms of both network capacity and merchant adoption. The interest from institutional investors also has been significant. The appearance of the necessary infrastructure, which will allow investment in Bitcoin through regulated and well-structured investment vehicles, will attract even more smart money into the crypto economy. Therefore it seems likely that Bitcoin can become a store-of-value asset and attract institutional money into its ecosystem.

This will make Bitcoin even more dominant in the crypto economy and will support and stabilize its value proposition. Due to its strong market position and overall ecosystem expansion, Bitcoin receives an A- from Simetri research:

Bitcoin Ichimoku cloud analysis indicates mixed signals across different time frames in terms of BTC price prediction. On one hand, on the daily chart, the BTC/USD pair has crossed below the Ichimoku cloud and is testing a break below the 61% Fibonacci retracement level at $7,362. A break below this level could open the door for further drops toward the key support level of $6,000.

On the other hand, on the monthly chart, the pair is being supported by the monthly Ichimoku cloud. We won't know until January 2020 whether this support is going to hold the pair back from further drops.
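For readers who want to reproduce the Ichimoku components behind this kind of analysis, here is a hedged Python sketch using pandas and the standard 9/26/52 settings. The CSV file and column names are placeholders, and this illustrates the indicator's textbook formulas rather than the author's exact workflow.

```python
# Standard Ichimoku components (9/26/52 settings) from daily OHLC data with pandas.
# The CSV file and column names are placeholders; not the author's exact workflow.
import pandas as pd

df = pd.read_csv("btc_daily.csv")  # expects columns: high, low, close


def midpoint(high: pd.Series, low: pd.Series, window: int) -> pd.Series:
    """Midpoint of the highest high and lowest low over the window."""
    return (high.rolling(window).max() + low.rolling(window).min()) / 2


df["tenkan"] = midpoint(df["high"], df["low"], 9)              # conversion line
df["kijun"] = midpoint(df["high"], df["low"], 26)              # base line
df["senkou_a"] = ((df["tenkan"] + df["kijun"]) / 2).shift(26)  # leading span A
df["senkou_b"] = midpoint(df["high"], df["low"], 52).shift(26) # leading span B
df["chikou"] = df["close"].shift(-26)                          # lagging span

# Price below both leading spans means trading below the cloud (bearish by this method).
below_cloud = df["close"] < df[["senkou_a", "senkou_b"]].min(axis=1)
print(below_cloud.tail())
```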

So when is a good time to buy Bitcoin?

Watch the video to find out, and let me know what you think.

Disclaimer: Trading in the financial markets involves a risk of loss, and you should only trade money you can afford to lose.

This article was originally published on InvestDiva.com.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
