
Announcing the ORBIT dataset: Advancing real-world few-shot learning using teachable object recognition – Microsoft

Object recognition systems have made spectacular advances in recent years, but they rely on training datasets with thousands of high-quality, labelled examples per object category. Learning new objects from only a few examples could open the door to many new applications. For example, robotics manufacturing requires a system to quickly learn new parts, while assistive technologies need to be adapted to the unique needs and abilities of every individual.

Few-shot learning aims to reduce these demands by training models that can recognize completely novel objects from only a few examples, say 1 to 10. In particular, meta-learning algorithms, which learn to learn using episodic training, are a promising approach to significantly reduce the number of training examples needed to train a model. However, most research in few-shot learning has been driven by benchmark datasets that lack the high variation that applications face when deployed in the real world.
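To make the episodic idea concrete, here is a minimal sketch of how such training is usually organised, using a prototypical-network-style nearest-centroid classifier over synthetic features. It illustrates the general recipe only; the dataset, feature dimensions and episode sizes are placeholder assumptions, not the ORBIT code.

```python
# Minimal sketch of episodic ("learn to learn") few-shot evaluation; not the ORBIT code.
# A real system would use a learned feature extractor; random features stand in here.
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(features_by_class, n_way=5, k_shot=5, q_queries=5):
    """Build one episode: a small support set to 'teach' and a query set to test."""
    classes = rng.choice(list(features_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, cls in enumerate(classes):
        idx = rng.permutation(len(features_by_class[cls]))[: k_shot + q_queries]
        examples = features_by_class[cls][idx]
        support.append((examples[:k_shot], label))
        query.append((examples[k_shot:], label))
    return support, query

def nearest_centroid_accuracy(support, query):
    """Prototypical-network-style classifier: one centroid per class in the support set."""
    centroids = np.stack([s.mean(axis=0) for s, _ in support])
    correct = total = 0
    for examples, label in query:
        dists = ((examples[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        correct += (dists.argmin(axis=1) == label).sum()
        total += len(examples)
    return correct / total

# Fake "dataset": 20 classes, 50 examples each, 64-d features (placeholders for video frames).
data = {c: rng.normal(loc=c % 7, size=(50, 64)) for c in range(20)}

accs = [nearest_centroid_accuracy(*sample_episode(data)) for _ in range(100)]
print(f"mean episode accuracy over 100 episodes: {np.mean(accs):.2f}")
```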

To close this gap, we have partnered with City, University of London to introduce the ORBIT dataset and few-shot benchmark for learning new objects from only a few, high-variation examples. The dataset and benchmark set a new standard for evaluating machine learning models in few-shot, high-variation learning scenarios, which will help train models that perform better in real-world settings. This work was done in collaboration with a multi-disciplinary team, including Simone Stumpf, Lida Theodorou, and Matthew Tobias Harris from City, University of London and Luisa Zintgraf from the University of Oxford. The work was funded by Microsoft AI for Accessibility. You can read more about the ORBIT research project and its goal to make AI more inclusive of people with disabilities in this AI Blog post.

You can learn more about the work in our research papers: ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition, published at the International Conference on Computer Vision (ICCV 2021), and Disability-first Dataset Creation: Lessons from Constructing a Dataset for Teachable Object Recognition with Blind and Low Vision Data Collectors, published at the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2021).

You're also invited to join Senior Researcher Daniela Massiceti for a talk about the ORBIT benchmark dataset and harnessing few-shot learning for teachable AI at the first Microsoft Research Summit. Massiceti will be presenting "Bucket of me: Using few-shot learning to realize teachable AI systems" as part of the Responsible AI track on October 19. To view the presentation on demand, register at the Research Summit event page.

The ORBIT benchmark dataset contains 3,822 videos of 486 objects recorded by 77 people who are blind or low vision using their mobile phones: 2,687,934 frames in total. Code for loading the dataset, computing benchmark metrics, and running baselines is available at the ORBIT dataset GitHub page.

The ORBIT dataset and benchmark are inspired by a real-world application for the blind and low-vision community: teachable object recognizers. These allow a person to teach a system to recognize objects that are important to them by capturing just a few short videos of those objects. The videos are then used to train a personalized object recognizer. This would allow a person who is blind to teach the recognizer their house keys or favorite shirt, and then recognize those items with their phone. Such objects cannot be identified by typical object recognizers because they are not included in common object recognition training datasets.

Teachable object recognition is an excellent example of a few-shot, high-variation scenario. It's few-shot because people can capture only a handful of short videos to teach each new object. Most current machine learning models for object recognition require thousands of images to train, and it's not feasible to ask people to submit videos at that scale, which is why few-shot learning is so important when people are teaching object recognizers from their own videos. It's high-variation because each person has only a few objects, and the videos they capture of those objects vary in quality, blur, how centered the object is, and other factors, as shown in Figure 2.

While datasets are fundamental for driving innovation in machine learning, good metrics are just as important in helping researchers evaluate their work in realistic settings. Grounded in this challenging, real-world scenario, we propose a benchmark on the ORBIT dataset. Unlike typical computer vision benchmarks, performance on the teachable object recognition benchmark is measured based on input from each user.

This means that the trained machine learning model is given just the objects and associated videos for a single user, and it is evaluated by how well it can recognize that user's objects. This process is done for each user in a set of test users. The result is a suite of metrics that more closely captures how well a teachable object recognizer would work for a single user in the real world.
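In pseudocode-like Python, the per-user protocol reduces to a simple loop: personalise on one user's videos, test on that same user's held-out videos, and report the spread across users. The sketch below is schematic; the personalise and evaluate callables and the field names are assumptions, not the published benchmark code.

```python
# Schematic per-user evaluation loop (not the official ORBIT benchmark code).
from statistics import mean, stdev

def evaluate_per_user(test_users, personalise, evaluate):
    """Personalise on each user's own videos, then test on that user's held-out videos."""
    per_user_accuracy = {}
    for user in test_users:
        model = personalise(user["support_videos"])          # only this user's teaching videos
        per_user_accuracy[user["id"]] = evaluate(model, user["query_videos"])
    scores = list(per_user_accuracy.values())
    # Reporting the spread, not just the mean, exposes users the recognizer fails badly for.
    return {"mean": mean(scores), "stdev": stdev(scores), "per_user": per_user_accuracy}

# Dummy stand-ins so the loop runs end to end; a real few-shot learner replaces both callables.
users = [{"id": f"user{i}", "support_videos": [], "query_videos": []} for i in range(3)]
fake_personalise = lambda support: None
fake_evaluate = lambda model, query: 0.5
print(evaluate_per_user(users, fake_personalise, fake_evaluate))
```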

Evaluations on highly cited few-shot learning models show that there is significant scope for innovation in high-variation, few-shot learning. Despite saturation of model performance on existing few-shot benchmarks, few-shot models only achieve 50-55% accuracy on the teachable object recognition benchmark. Moreover, there is a high variance between users. These results illustrate the need to make algorithms more robust to high-variation (or noisy) data.

Creating teachable object recognizers presents challenges for machine learning beyond object recognition. One example of a challenge posed by a human-centric task formulation is the need for the model to provide feedback to users about the data they provided when teaching it a new personal object. Is it enough data? Is it good-quality data? Uncertainty quantification is an area of machine learning that can contribute to solving this challenge.
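One simple, generic proxy for that kind of feedback is the entropy of the model's predictive distribution, averaged over several predictions (for example, over frames of a new video or Monte Carlo dropout passes). The sketch below illustrates the idea only; it is not a method from the ORBIT paper, and the threshold is an arbitrary assumption.

```python
# Generic illustration: flag a newly taught object when the model is too uncertain about it.
# Not from the ORBIT paper; the 0.5 threshold is an arbitrary assumption.
import numpy as np

def predictive_entropy(probs, eps=1e-9):
    """Entropy of a softmax distribution; higher means less confident."""
    probs = np.clip(probs, eps, 1.0)
    return float(-(probs * np.log(probs)).sum())

def needs_more_examples(prob_batches, threshold=0.5):
    """Average entropy over several predictions (e.g. frames or MC-dropout passes)."""
    mean_entropy = np.mean([predictive_entropy(p) for p in prob_batches])
    return mean_entropy > threshold, mean_entropy

confident = [np.array([0.9, 0.05, 0.05])] * 5   # low entropy: probably enough teaching data
unsure = [np.array([0.4, 0.35, 0.25])] * 5      # high entropy: ask the user for another video
print(needs_more_examples(confident))
print(needs_more_examples(unsure))
```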

Moreover, the challenges in building teachable object recognition systems go beyond machine learning algorithmic improvements, making it an area ripe for multi-disciplinary teams. Designing the feedback of the model to help users become better teachers requires a great deal of subtlety in user interaction. Supporting the adaptation of models to run on resource-constrained devices such as mobile phones is also a significant engineering task.

In summary, the ORBIT dataset and benchmark provide a rich playground to drive research in approaches that are more robust to few-shot, high-variation conditions, a step beyond existing curated vision datasets and benchmarks. In addition to the ORBIT benchmark, the dataset can be used to explore a wide set of other real-world recognition tasks. We hope that these contributions will not only have real-world impact by shaping the next generation of recognition tools for the blind and low-vision community, but also improve the robustness of computer vision systems across a broad range of other applications.

The rest is here:
Announcing the ORBIT dataset: Advancing real-world few-shot learning using teachable object recognition - Microsoft

Read More..

Stop Bashing ML Hackathons Already, Because They Are Not Close To Real-World – Analytics India Magazine

For years, people have been comparing machine learning and data science hackathons with real-world implications. Yet, ironically, the debates are never-ending and often ambiguous.

Consider, for instance, online hackathon platforms like Kaggle or MachineHack. These platforms allow users to find and publish datasets, explore and build models in a web-based data science environment, collaborate with other data scientists and machine learning engineers, and enter competitions to solve data science and machine learning challenges across experience levels, from beginner to intermediate and expert.

Hackathon platforms have been serving as a test bed for data scientists and machine learning professionals. According to Kaggle, more than 55 per cent of data scientists have less than three years of experience, while six per cent of those pursuing data science have been using machine learning for more than a decade.

There is a lot more to gain than to lose by participating in hackathons. Some of the benefits include:

In this article, we will talk about the differences between hackathon platforms and real-world machine learning projects and draw a clear distinction between the two.

Before we delve deeper into the difference between hackathons and real-world machine learning projects, let's look at the lifecycle of a machine learning project. As explained by Steve Nouri, founder of AI4Diversity, it typically involves:

Many industry experts believe that hackathon platforms can be an amazing way to experiment and learn. Still, they align with only a single stage of the ML lifecycle, i.e., training the model. When a data scientist builds a model in the real world and optimises the metric, they also need to consider the RoI, inference and re-training costs, and costs in general. That piece of the puzzle is completely missing when working on hackathon platforms.

"To drive the adoption of an ML model within the business stakeholders, it is important we think about interpretability as well," said Sushanth Dasari, data scientist at Trust, adding that interpretability drives a lot of key decisions in each step of the lifecycle, which is never the case with a hackathon.

"In real-world ML projects, 90 per cent of the time is spent on acquiring, cleaning and processing the data, often querying different databases and merging this data. The quality of the input data needs to be carefully assessed and checked for correctness, integrity, and consistency," said Daniele Gadler, data scientist at ONE LOGIC GmbH.

Further, he said that once the ML model has been developed and deployed, a lot of time goes into monitoring the model and re-training it on newly ingested data (MLOps). In hackathons, by contrast, the data is already provided and is generally cleaner than in real-world projects. Furthermore, there are no concerns about real-world issues such as model stability, maintainability, deployability, etc. "You can just focus on developing a super-complex, unmaintainable, huge model with the goal of obtaining the best performance on the data provided for the competition, hoping it will generalise to newly unseen data," said Gadler.
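The acquire-clean-merge work Gadler describes is exactly the part that hackathons hand you ready-made. As a small, generic illustration of that stage (the tables and column names are invented for the example), merging two sources and checking them for basic problems might look like this:

```python
# Generic illustration of the unglamorous "query, merge, check" stage; all names are invented.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, None],
    "amount": [120.0, 80.5, -5.0, 310.0, 95.0],   # a negative amount is suspicious
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "region": ["north", "south", "south", "east"],
})

# In a real project you would log or fail on these checks, not silently drop rows.
n_missing_key = orders["customer_id"].isna().sum()
n_negative = (orders["amount"] < 0).sum()

cleaned = (
    orders.dropna(subset=["customer_id"])            # rows with no join key
          .query("amount >= 0")                      # impossible values
          .merge(customers, on="customer_id", how="left", validate="many_to_one")
)
print(f"dropped {n_missing_key} rows with missing keys, {n_negative} with negative amounts")
print(cleaned)
```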

Joseph Wehbe, co-founder and CEO of DAIMLAS.com, said that time is wasted improving accuracy by 0.000001 on hackathon platforms, something you do not do in the real world, and that such platforms focus on only one performance metric. "However, in the real world, you focus on scalability, speed, deployment, and cost. You don't learn how to clean raw data. You don't learn how to understand the business problem, deployment skills, team skills like interacting with leadership, or the analysis needed to understand what business problem you are trying to solve," he added.

While hackathon platforms like Kaggle, MachineHack, etc., push users to explore new problems, they also help them understand the science part well enough to do real-world work.

Hackathon platforms can be as real as the real world; only the environments are different. What a gym is for athletes, hackathon platforms are for data scientists and machine learning professionals: a great place to practice and learn.

Amit Raja Naik is a senior writer at Analytics India Magazine, where he dives deep into the latest technology innovations. He is also a professional bass player.

Read the original here:
Stop Bashing ML Hackathons Already, Because They Are Not Close To Real-World - Analytics India Magazine

Read More..

The Cloud Software Industry Is at a Moment of Reckoning – Business Insider

In September, Box CEO Aaron Levie just barely won a tough board fight against the activist investor Starboard, which sought to replace several members of the cloud storage company's board with its own slate. Its chief complaint: Box's annual growth rate of 11% on $770.8 million in revenue in 2021 just isn't enough.

Not quite as dramatically, but no less significantly, companies like Zoom, Autodesk, and DocuSign also recently suffered the consequences of disappointing Wall Street. Each gave investors lowered guidance, reflecting the reality that the slow return to the office meant the pandemic-driven boom in their respective businesses was likely over. Each saw its stock take a major dip.

Industry insiders see it all as a sign that even the hottest cloud software companies are suffering from a case of heightened expectations: After almost every major cloud company saw huge growth, much of Wall Street now seems to believe that even good, dependable revenue expansion isn't good enough.

"The cloud ecosystem has gotten a big tailwind from decentralized workforces and the work-from-home mandates," said Byron Deeter, a partner at Bessemer Venture Partners and a longtime investor in the cloud space. "And so what used to be considered acceptable growth is now being pushed out. And I think that is adding pressure on some of the slower-growing public cloud companies to perform even more."

Meanwhile, the titans of the cloud industry, namely Microsoft, Amazon Web Services, and Salesforce, have only gotten stronger over the past two years and show no signs of giving up ground in the market. That's made it increasingly hard for smaller software companies to compete.

This dynamic is pushing software companies like Zoom and others to find new ways to grow as the world returns to normal. Analysts say to expect further consolidation in the market as software companies that didn't grow massively during the pandemic or are showing signs of slowing down look for new options.

Zoom, for example, is focusing on its cloud phone business and telephony as what could be its next big market. (Its aborted $14.7 billion deal to buy Five9 would have marked a big push into the contact-center business.) Box launched an e-signature product to compete with DocuSign, while the 32-year-old firm Citrix bought the task-management startup Wrike in a productivity push.

The stakes are high for those companies, analysts say, with growth often depending on how well they navigate expansion beyond the core business. Those that can't nail it often face the end of their existence as independent companies.

"Best-of-breed companies either become multiproduct companies, and they can continue their organic or inorganic kind of standalone growth, or the thing that they do, they tap out, and they don't really know how to do the other stuff, and so then they get acquired," said Alex Zukin, an analyst at Wolfe Research.

While software is more important than ever to keep a business running, companies are looking to reduce the amount of software they buy, RBC analysts said in a recent note to clients. That, in turn, is leading them to spend with larger platforms, like Salesforce, Microsoft, or AWS, that bundle many products into subscription suites, the analysts wrote.

That puts even more pressure on independent software companies. Slack, the workplace chat app, spent the early days of the pandemic facing investor scrutiny about its ability to compete with Microsoft Teams in a remote-first world. Ultimately, Salesforce acquired Slack for $27.7 billion.

Experts say that getting snapped up isn't always a bad thing; it helps the larger players "inject modernity" into their business, Zukin said, while giving the acquired company access to the sales and marketing resources it needs to stand a better chance of competing.

Ultimately, however, the experts agreed that in this environment of heightened expectations, the only way for a software company to ensure its survival, or at least its independence, is to become "mission critical," meaning literally irreplaceable to customers, including by expanding its product lineup.

"So it is increasingly important that software companies have the ability to become a larger platform," RBC analysts said in a note earlier this year. "Otherwise they risk being unable to gain meaningful traction beyond a certain threshold."

Do you have insight to share? Contact this reporter via email at pzaveri@insider.com or Signal at 925-364-4258. (PR pitches by email only, please.)

Excerpt from:
The Cloud Software Industry Is at a Moment of Reckoning - Business Insider

Read More..

Cloudflare Challenges AWS with R2 Storage and No Egress Fees – InfoQ.com

Cloudflare has recently announced R2 storage, an S3-compatible service for storing large amounts of data with no egress bandwidth fees. An automatic migration of objects from Amazon S3 to Cloudflare R2 will be offered to facilitate the transition or integration for existing AWS deployments.
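S3 compatibility means existing S3 tooling should, in principle, keep working once it is pointed at a different endpoint. A minimal sketch with boto3 is shown below; the endpoint URL format, bucket name and credentials are placeholders rather than confirmed R2 values, since the service is still behind a waitlist.

```python
# Sketch: talking to an S3-compatible store (such as R2) with standard S3 tooling.
# The endpoint URL, bucket name and credentials below are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<account-id>.r2.cloudflarestorage.com",  # assumed endpoint format
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)

s3.upload_file("report.pdf", "my-bucket", "reports/report.pdf")
for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```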

Cloudflare claims it will eliminate egress fees, deliver object storage that is at least 10% cheaper than S3, and make infrequent access free. In the announcement, Matthew Prince, co-founder and CEO of Cloudflare, explains:

Since AWS launched S3, cloud storage has attracted, and then locked in, developers with exorbitant egress fees. (...) Our aim is to make R2 Storage the least expensive, most reliable option for storing data, with no egress charges.

Automatic migration from S3 to R2. Source: https://blog.cloudflare.com/introducing-r2-object-storage/

After promoting the Bandwidth Alliance, a group of cloud and networking companies committed to discounting data transfer fees, in the hope that AWS would join, Cloudflare highlighted last summer what Prince calls AWS's "Egregious Egress" and "Hotel California Pricing":

During the last ten years, industry wholesale transit prices have fallen an average of 23% annually. Compounded over that time, wholesale bandwidth is 93% less expensive than 10 years ago. However, AWS's egress fees over that same period have fallen by only 25%.

Analyzing the "Compelling Economics of Cloudflare R2", Coney Quinn, cloud economist at The Duckbill Group, explains how the new service could be used by existing AWS customers:

Cloudflare offers an "S3 proxy" of sorts; you can drop this in front of S3 or, frankly, any S3-compatible object store, which is effectively all of them. And suddenly the fun begins.

In a popular tweet thread, Quinn adds:

I'm really curious what position AWS is going to take on Cloudflare's free egress: 1) That's impossible, Cloudflare will go bankrupt doing this. 2) Yeah, you caught us, we've been ripping you off for years. Have a discount. 3) Complete silence.

The announcement has been discussed and well received by developers on Hacker News and Reddit. Claiming that Cloudflare is "Eating the Cloud from Outside In", Shawn Wang, developer experience at Temporal.io, writes:

Cloudflare took a part of the cloud nobody valued, gave away an insanely good free offering, and quietly accumulated an 80% market share. Meanwhile, when people think of Tier 1 AWS services, its Cloudflare equivalent, Amazon CloudFront, rarely gets any love.

A few users question the name R2 ("Rapid and Reliable"), with Taloflow providing a name generator for future object storage services. R2 is still under development, with a waitlist for access. It is expected to cost 0.015 USD per GB per month, with no data egress charges and zero-rated request charges until customers make double-digit requests per second.
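A rough back-of-the-envelope calculation shows why the absence of egress fees matters more than the headline storage price. The R2 storage price below comes from the announcement; the S3 figures are approximate 2021 US list prices and should be read as assumptions.

```python
# Rough cost comparison for storing 1 TB and serving it out once per month.
# R2 storage price is from the announcement; S3 prices are approximate 2021 list prices.
TB = 1000  # GB

r2_storage_per_gb = 0.015        # USD per GB-month (announced)
s3_storage_per_gb = 0.023        # USD per GB-month, S3 Standard (approximate)
s3_egress_per_gb = 0.09          # USD per GB, first tier of internet egress (approximate)

r2_monthly = TB * r2_storage_per_gb                       # no egress charge claimed
s3_monthly = TB * (s3_storage_per_gb + s3_egress_per_gb)  # storage plus one full read out

print(f"R2: ${r2_monthly:.2f}/month  S3: ${s3_monthly:.2f}/month")
# R2: $15.00/month  S3: $113.00/month -- the gap is almost entirely egress.
```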

See more here:
Cloudflare Challenges AWS with R2 Storage and No Egress Fees - InfoQ.com

Read More..

WhatsApp Working On A Feature To Allow Users Better Manage Their Cloud Storage – Mashable India

Facebook's instant messaging platform WhatsApp is reported to be working on a new feature that will allow its users to manage their chat backups in the cloud (Google Drive or iCloud). To be more specific, the new feature will let WhatsApp users manage chat backups by excluding unnecessary content (like photos and documents) from their cloud backup.

According to the WABetaInfo report, the feature, which is currently in the testing stage, will initially be available to Android users only and will allow them to select what content stays in their cloud backup. After offering end-to-end encrypted backups, the newly released version (WhatsApp for Android beta 2.21.21.7) will soon let users manage the backup size. Termed 'Manage backup size', the feature will allow WhatsApp users to trim their cloud backup by excluding specific media (like photos and videos) from the next backup. While no specific release date has been announced yet, the WhatsApp team is expected to roll out the feature soon.

But what's the main reason behind this update? While WhatsApp backups have not counted against your Google Drive storage so far, the report also suggests that Google may soon stop offering unlimited storage for WhatsApp backups. Neither party has made an official announcement yet, but Google may limit the storage space to 2000 MB per user, though even WABetaInfo has described this as a rumour for now.

Earlier this month, WhatsApp rolled out a new end-to-end encrypted chat backup feature for iOS and Android users to raise the level of security offered on message backups. Announcing the additional layer of privacy and security, Facebook CEO Mark Zuckerberg said, "WhatsApp is the first global messaging service at this scale to offer end-to-end encrypted messaging and backups."

SEE ALSO: Apple Airpods May Soon Help You Monitor Body Temperature And Posture: Report

Cover Image: Shutterstock

Read this article:
WhatsApp Working On A Feature To Allow Users Better Manage Their Cloud Storage - Mashable India

Read More..

How to change the date and time of a photo in Google Photos – Android Central

Google Photos is one of the best cloud storage services for storing your images and videos. The app brings many options and features to the table, one of them being the ability to change the date and time of a photo. If you want to learn how to do so, follow the steps below to a T.

Open the photo you want to edit in the Google Photos app, then tap the three dots in the upper-right corner.

Tap on the date to change it.

Tap on the time to change it.

After you have changed the date and time information of a photo, the updated details will sync across all your devices and appear in Google Photos everywhere else as well.


Go to https://photos.google.com/ on your desktop.

Click on the image.

Click on the pencil icon next to the date and time.

Enter new date and time details.

Click Save after entering the new date and time.
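Google Photos reads the capture time from a photo's EXIF metadata when it is uploaded, so another option, if you prefer to fix the file itself before uploading, is to rewrite the EXIF date with a small script. The sketch below uses the piexif library as an alternative to the in-app edit above; the filename and timestamp are examples.

```python
# Sketch: rewrite a JPEG's EXIF capture time before uploading, using the piexif library.
# The filename and timestamp are examples; back up the original file before editing metadata.
import piexif

path = "IMG_0001.jpg"
new_time = b"2021:10:17 09:30:00"   # EXIF expects "YYYY:MM:DD HH:MM:SS"

exif_dict = piexif.load(path)
exif_dict["0th"][piexif.ImageIFD.DateTime] = new_time
exif_dict["Exif"][piexif.ExifIFD.DateTimeOriginal] = new_time
exif_dict["Exif"][piexif.ExifIFD.DateTimeDigitized] = new_time
piexif.insert(piexif.dump(exif_dict), path)   # writes the updated EXIF back into the file
```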

We love Google Photos because it's wonderfully simple and incredibly accessible to all Android users. It nails the basics of backing up videos and images, editing tools, auto-sync, and sharing options. What's more, you can even use Photos to create a Photo Book and have it printed in a physical form. Now that's something your everyday gallery app can't do! So whether it's an image viewer you're looking for or a cloud storage platform, Google Photos knocks the ball out of the park.

See more here:
How to change the date and time of a photo in Google Photos - Android Central

Read More..

MadHive Deal Gives Google A Leg Up In The Cloud Wars – Forbes


As data assumes a greater place in the advertising universe, a skirmish of sorts has broken out among the major cloud storage companies, Google and Amazon in particular, to win the hearts and minds of the advertising and media community.

I say skirmish because, given the massive size of Google's and Amazon's cloud storage businesses, the amount of revenue they will get from advertisers and publishers is relatively small.

But in a battle for dominance, every win counts and so the battle lines have been drawn.

Google inadvertently intensified the migration of advertising data to the cloud by its decision to (eventually) do away with cookies, the digital trackers that allow brands to gather information on consumers as they travel around the web.

While there will be short-term pain, overall this is a positive development, since cookies were in many ways like TV's Nielsen ratings: a currency everyone agreed on because an industry had grown up around it and there was no other option, but a currency that many people suspected was far less accurate than it appeared to be.

The demise of cookies will force brands to begin collecting more of their own first-party data and then finding ways, cloud-based ways, to ensure that data is privacy compliant and can then be matched, in a privacy-compliant manner, with customer data from publishers and programmers.

The Rise Of The Clean Room

Data matching takes place on a cloud-based platform that is known as a clean room. Clean rooms provide a way to ensure that data from the advertiser, data from the publisher and data from third parties can be compared and matched in a way that ensures neither side actually sees the other side's data unless there is a match. This is critical, as it ensures privacy compliance.
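The core mechanic is a join on identifiers that each side has transformed before sharing, so that only matching rows ever become visible. Production clean rooms use much stronger protections (keyed hashing, private set intersection, aggregation thresholds), so the hashed-email join below is only a simplified illustration with made-up data.

```python
# Simplified illustration of clean-room-style matching: join on hashed emails, made-up data.
# Real clean rooms add salting/keyed hashing, private set intersection and aggregation rules.
import hashlib

def pseudonymise(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Each party hashes its own identifiers before anything is shared.
advertiser = {pseudonymise(e): row for e, row in [
    ("ann@example.com", {"bought": "running shoes"}),
    ("bob@example.com", {"bought": "headphones"}),
]}
publisher = {pseudonymise(e): row for e, row in [
    ("bob@example.com", {"saw_ad": "sports_campaign"}),
    ("cat@example.com", {"saw_ad": "audio_campaign"}),
]}

# Only keys present on both sides are joined; unmatched records stay invisible to the other party.
matches = {k: {**advertiser[k], **publisher[k]} for k in advertiser.keys() & publisher.keys()}
print(f"{len(matches)} matched record(s)")
for k, row in matches.items():
    print(k[:12], row)
```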

The more advertisers and their agencies come to rely on these clean room environments, the more money will be at stake, and if one of the giants can take a clear lead in this area, it will give them a slight leg up over the competition.

$100 Million Is A Big Deal

That is why Google must be very happy about a recent deal brokered by the consulting firm SADA that will see adtech pioneer MadHive double last year's initial $50 million investment in Google's cloud-based solutions. This brings MadHive's total investment to $100 million, making it one of the biggest Google Cloud deals in adtech to date.

MadHive has long been one of the most innovative players in the space, rolling out machine learning and cryptography-based solutions to address industry problems like fraud and privacy.

Today, MadHive is a leader in infrastructure-as-a-service enterprise software, focused on accelerating local OTT (over-the-top TV, also known as CTV) reach extension with major broadcasters like Fox, Scripps and TEGNA's Premion.

By doubling its investment in Google Cloud, MadHive will strengthen its ability to deliver:

Baked-in fraud detection and prevention: While fraud is considered to be largely a problem for digital advertisers, it's becoming a real issue on OTT as well. MadHive has been a leader in helping to detect and prevent fraud, and the deal will allow it to expand this capability.

Advanced targeting capabilities that fully comply with GDPR and CCPA privacy regulations. As global privacy laws expand, advertisers need to ensure that the solutions they implement stay within the boundaries of these new regulations. MadHive offers cloud-based solutions that help ensure compliance while respecting consumers' right to privacy.

Simplified, full-stack software that removes unnecessary middlemen and their fees. When I first met MadHive CEO Adam Helfgott many years ago, I was impressed by his understanding of how brands were being forced to pay an "ad tax" because of all the middlemen involved in digital advertising, and by his commitment to eliminating that sort of costly system on OTT. Cloud-based solutions help to simplify transactions and thus eliminate costly middlemen.

Interoperability across various screens and channels, including digital out-of-home, digital audio, display, and more. Consumers don't just watch TV or just watch videos online; they move across all available channels with great frequency. The ability to understand their behavior across all these screens and channels allows for better targeting and an end to overtargeting consumers based on their demographics.

"Over the past year, MadHive has been in a period of hyper-growth as broadcasters and brands adopt our technology to power their cross-channel advertising efforts," said Adam Helfgott, CEO at MadHive. "This increased investment in SADA and Google Cloud will allow MadHive to create an even stronger infrastructure that allows for lightning-speed insights and campaign optimizations, while solving widespread industry problems like fraud, transparency, privacy and interoperability for our clients."

Creating solutions for advertisers and publishers will be key for cloud computing services in the years ahead. The issue now is around getting traction, about scoring the most high profile wins up front so that the industry has you pegged as the front runner.

While it is still way too soon to declare a winner, Google's deal with MadHive indicates that it has the know-how to get the job done.

See the article here:
MadHive Deal Gives Google A Leg Up In The Cloud Wars - Forbes

Read More..

Teen arrested after national organization finds explicit photos, videos involving minors – FOX13 Memphis

MEMPHIS, Tenn. A 19-year-old is behind bars after what the National Center for Missing and Exploited Children (NCMEC) found on the teen's computer.

Fredtravis Trey McKnight was booked on Oct. 13 into the Shelby County Jail on one count of sexual exploitation of a minor after Memphis Police received a tip from the NCMEC.

The NCMEC told MPD the report was initiated after an online cloud storage service reported that one of its users had uploaded media known to be child sexual abuse/exploitation material, an affidavit said.

The online cloud storage service provided the user's email addresses and the screen name of the suspect, Trey McKnight.

According to the affidavit, NCMEC forwarded the complaint to MPD based on the geographical location of McKnight's IP address.

MPD reviewed the information and found eight videos of minors engaging in sex acts plus one video of a nude minor engaging in lascivious display, the affidavit said.

After further investigation, MPD found the user to be a Comcast internet subscriber at a home off Foggy Ridge Cove in Hickory Hill.

McKnight was arrested and his bond was set at $70,000.

He is due in court on Oct. 21.



2021 Cox Media Group

Original post:
Teen arrested after national organization finds explicit photos, videos involving minors - FOX13 Memphis

Read More..

To the stars: NetApp bringing cloud-native Astra Blocks and Files – Blocks and Files

NetApp is announcing the availability of an early preview of a file-focussed addition to its Astra family of Kubernetes products, so that users get block stores and a cloud-native file store.

Astra Data Store (ADS) is a Kubernetes-native shared file unified data store for containers and virtual machines (VMs) with advanced enterprise data management and a standard NFS client. The software is based on NetApp's enterprise data management technologies, meaning, we understand, ONTAP.

Eric Han, a NetApp VP of product management, said in a supplied statement: "With Astra Data Store we're giving customers more infrastructure options to build modern datacentres, with the ability to deploy world-leading primary storage and data management solutions directly into their Kubernetes clusters."

Back in August last year, Han blogged that "the Project Astra team has been redesigning the NetApp storage operating system, ONTAP, to be Kubernetes-native." We think this is the first appearance of cloud-native ONTAP functionality.

ADS has been designed to fix challenges for Kubernetes users, including the lack of mature shared file services, proprietary file clients, and managing data stores separately for virtual machines and containers. It is said to be one of the first Kubernetes-native, unified shared file services for containers and VMs, offering multiple parallel file systems on the same resource pool.
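From a consumer's point of view, a Kubernetes-native shared file service typically surfaces as a ReadWriteMany PersistentVolumeClaim that many pods (or VMs) mount at once. The sketch below uses the official Kubernetes Python client; the storage class name is a placeholder assumption, not a documented Astra Data Store value.

```python
# Sketch: requesting a shared (ReadWriteMany) file volume with the official Kubernetes Python client.
# "astra-data-store" is a placeholder storage class name, not a documented ADS value.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-media"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],               # many pods/VMs mount the same file system
        storage_class_name="astra-data-store",        # placeholder
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```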

The ADS software includes replication and erasure coding technologies for Kubernetes-native workloads so as to increase resiliency.

In the coming months NetApp will introduce more data services and hybrid and multi-cloud capabilities, both by itself and co-developed with partners and customers.

The ADS preview will be publicly available over the coming months, with general availability targeted for the first half of 2022.

By converting its ONTAP storage software functionality to containerised code and moving it into the Kubernetes space, NetApp is making sure that it is in the front line for offering data storage and services to cloud-native applications and developers.

This means NetApp will be able to offer strong competition to cloud-native startups such as Ondat, the renamed StorageOS, and Pure's Portworx business unit. It will be able to reassure its existing customers that they have no need to move to a risky startup to get such services; they can stay with trusty NetApp instead. This message could help prevent DevOps people inside NetApp's customer base choosing a cloud-native startup for their storage. And NetApp can also go to cloud-native developers outside its base and say it is a more reliable storage supplier than any young startup.

Follow this link:
To the stars: NetApp bringing cloud-native Astra Blocks and Files - Blocks and Files

Read More..

IBM sprays storage improvements across its Spectrums Blocks and Files – Blocks and Files

IBM has announced enhancements across its Spectrum storage software products, supporting Azure, boosting AIOps, speeding data to GPUs and adding a larger proprietary flash drive.

The news was revealed in an IBM blog with no identified author.

The blog summed things up by blandly announcing: Today, IBM is announcing new capabilities and integrations designed to help organisations reduce IT complexity, deploy cost-effective solutions and improve data and cyber resilience for hybrid cloud environments.

The announcements cover Spectrum Virtualize, Protect, Protect Plus and Scale, as well as AIOps for FlashSystem and the ESS 3200, which runs Spectrum Scale software.

Spectrum Virtualize is the operating, management and virtualization software used in the Storwize and FlashSystem arrays and SAN Volume Controller. The Storwize brand was absorbed into the FlashSystem brand in February 2020.

Spectrum Virtualize for Public Cloud (SVPC) is available for the IBM public cloud and was made available on AWS in April 2019, providing a hybrid on-premises-to-AWS capability; it's been a long time coming to Azure. IBM announced a forthcoming beta program for Spectrum Virtualize for Public Cloud on Azure in February this year and now, eight months later, the software is generally available.

That means on-premises FlashSystem and SAN Volume Controller deployments can have public cloud-based disaster recovery sites, can migrate data to SVPC in the cloud and support what IBM calls cloud DevOps. This set of disaster recovery, migration and cloud DevOps facilities can function between public clouds as well.

SVPC on Azure supports IBM Safeguarded Copy, which automatically creates isolated immutable snapshot copies designed to be inaccessible by software. That means it functions as ransomware data protection.

Will we see SVPC supporting the Google Cloud Platform? We think so.

IBM's FlashSystem AIOps capabilities are being boosted by acquired Turbonomic AI-powered Application Resource Management (ARM) and Network Performance Management (NPM) software technology.

In effect we have IBM's response to HPE's industry-leading InfoSight system monitoring and predictive analytics technology. IBM says:

Big Blue says this reduces the need for over-provisioning the arrays and means their density can be increased, by up to 30 per cent on average, with no performance impact. That is surely good news.

It gets better for FlashSystem users with Instana, Red Hat OpenShift, VMware vSphere or other major hypervisors, since Turbonomic will observe the entire stack from application to array. In IBM speak: "This enables all operations teams to quickly visualise and automate corrective actions to mitigate performance risk caused by resource congestion, while safely increasing density."

Other IBM storage news:

There are no public numbers to allow a comparison with other GDS-supporting suppliers such as DDN, Pavilion, VAST Data and WekaIO.

In March we reported that Spectrum Scale delivered 94GB/sec to Nvidia GPUs across GDS. A 100 per cent increase would take this to 188GB/sec, still shy of Pavilion's 191GB/sec.

Read the rest here:
IBM sprays storage improvements across its Spectrums Blocks and Files - Blocks and Files

Read More..