
Skiff is launching Skiff Mail to take on Gmail with encryption – The Verge

Skiff has spent the last couple of years developing a privacy-focused, collaborative document editing platform that you could most succinctly describe as encrypted Google Docs. Now, it's coming for Gmail. The company is launching an email service called Skiff Mail that aims to be, well, encrypted Gmail and eventually much more than that.

Ultimately, Skiff co-founder and CEO Andrew Milich says Skiff wants to build a complete workspace, something as sweeping and broad as Microsoft 365 or Google Workspace. But the only way to do that is to solve email, which is, in so many ways, the core of both platforms. "It's the most private corpus of our lives, you know?" Milich says. In an effort to keep people's most important information safe (doctors' notes, confirmation numbers, work emails, family chats, and everything else), he says email felt like a logical and critical next step.

Email is also a potential growth hack for Skiff. "It's really, really hard to move off of a service you're using today," Milich says, "when your main identity, your main communication layer, the way you're actually living on the internet, is outside of that." In other words, for every user going to Skiff Mail instead of Gmail, that's another person for whom Skiff's other products are just a click away. Right now, Skiff is free for personal use and makes money through business subscriptions; Milich didn't say what Skiff's plans are for email but said that advanced features will likely be paid ones down the road.

Rather than reinvent the wheel and come up with some Hey-level new paradigm for how email works, Skiff is starting fairly simple. The app right now, which works on web, Android, and iOS, looks like Gmail minus all the color and UI cruft. It's almost all text, with folders on the left and a reading view for your current message on the right. In other words, it's an email app, and a pretty barebones one at that. Right now, there's no support for custom domains. You can't check your Gmail in Skiff, and there's not even much in the way of automation or organization tools. Milich says the simplicity is mostly by design: "We didn't go super-ambitious and say, like, 'We're going to reinvent email with a new set of inboxes, a new set of filter rules, a new set of templates.'" The goal instead was to make all the important stuff (text editing, search, managing attachments) work really well.

That's not to say there's no ambition to Skiff Mail. It's just that Milich's whole theory is that this privacy-first app strategy only works if people actually like using the apps. So many apps and services focused on privacy and security practically scream their values at you. The apps are harder to use, force you to manage more systems or click through a thousand warning messages, or just look like they were built by cryptographers rather than designers. (Because usually, they were!) One Skiff advisor told me many of these products look more like advocacy campaigns than competitive products. Skiff is trying to live all those same values (the company often publishes its research, and much of its code is open source) but in a much more user-friendly package.

Get Milich talking long enough, though, and he'll start to veer into much funkier territory. One of Skiff's recent projects has been to integrate its document platform with the IPFS protocol, a decentralized networking layer that users can now choose to use to store their data. Milich also has ideas about bringing Skiff Mail to the Web3 community. He imagines users with .ETH domain names using those addresses for totally encrypted and decentralized messaging, for instance, or maybe enabling wallet-to-wallet communication via MetaMask integration. "Encryption and public key/private keys are so much about what identity means at Skiff," Milich says, "and it's also what we're seeing identity become in Web3."

There's increasing evidence that "Gmail but private" is a compelling offer for many. Proton, the maker of ProtonMail, said last year that it has more than 50 million users, while platforms like Fastmail and Librem Mail continue to grow as well. Gmail remains the behemoth in the market, effectively the only company that actually matters in email, but those looking for something different have more choices than ever.

Still, even if Skiff could figure out how to build the greatest and most private email system ever conceived, getting people to switch email providers is a nearly impossible task. The inertia is enormous. Switching email accounts is like changing phone or credit card numbers, the kind of thing you only do when absolutely necessary. That's why most companies don't even try to take on Gmail. Even the majority of email apps that do exist are mostly front-ends on Gmail, not wholesale rethinks of the system. Milich says Skiff has some ideas about how to ease the transition but acknowledged that it's a huge hurdle.

One of the tricky things about the idea of private email is that, by design, nobody can actually control email. It would be easy enough for Skiff to build an encrypted email platform if it was just Skiff users emailing other Skiff users, but that's not how email works. Instead, the team has tried to build a tool that scales up and down the security spectrum. When Skiff users do email other Skiff users, everything is encrypted by default and easy for senders to revoke or verify, but when you're emailing outside the ecosystem, the SMTP protocols still work.
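
The article doesn't describe Skiff's implementation, but the "scale up and down the security spectrum" idea can be sketched as a simple routing decision: encrypt end-to-end when the recipient's public key is known (an in-ecosystem user), and fall back to ordinary SMTP delivery otherwise. The sketch below is illustrative only; the key directory and `encrypt_for()` helper are hypothetical placeholders, not Skiff's actual protocol or code.

```python
# Illustrative sketch only -- not Skiff's code. If the recipient is inside the
# encrypted ecosystem (we know their public key), encrypt end-to-end; otherwise
# hand the message off to a standard SMTP relay in the clear.
from typing import Optional

# Hypothetical in-ecosystem key directory: address -> public key.
KEY_DIRECTORY = {"alice@skiff.example": "alice-public-key"}

def lookup_public_key(address: str) -> Optional[str]:
    """Return the recipient's public key if they are an in-ecosystem user."""
    return KEY_DIRECTORY.get(address)

def encrypt_for(public_key: str, body: str) -> str:
    """Placeholder for real public-key encryption (e.g., a sealed-box scheme)."""
    return f"<ciphertext for {public_key}: {len(body)} bytes>"

def send_mail(recipient: str, body: str) -> str:
    key = lookup_public_key(recipient)
    if key is not None:
        # Both ends are in the ecosystem: encrypt before the message leaves the client.
        return f"deliver internally: {encrypt_for(key, body)}"
    # Outside the ecosystem: standard SMTP delivery still works, without end-to-end encryption.
    return f"deliver via SMTP relay to {recipient}: {body}"

print(send_mail("alice@skiff.example", "lab results attached"))
print(send_mail("bob@gmail.example", "lunch on Friday?"))
```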

Milich hopes that as more providers embrace privacy, they'll build tools to match and, by extension, improve the whole ecosystem. But he figures that, even for now, if the least Skiff can do is say "we'll keep your most important communication safe, even from us," that counts for something.

Read more here:
Skiff is launching Skiff Mail to take on Gmail with encryption - The Verge

Read More..

Logarithmic Finance Might Be The Next Big Crypto In World Like Ethereum And Bnb | Mint – Mint

Problem-solving is one of the most common features of any top-tier cryptocurrency. Cryptocurrencies with this property can disrupt the cryptocurrency market. From Bitcoin (BTC) to Ethereum (ETH) and from Binance Coin (BNB) to Ripple (XRP), each of these giant cryptocurrencies ranks high in problem-solving capabilities. Logarithmic Finance (LOG) is a recently launched cryptocurrency aimed at overturning the crypto world. It has all the features needed to eventually become the next big thing in the crypto market.

Logarithmic Finance (LOG)

Logarithmic Finance (LOG) aims to be a next-generation decentralised finance and trading protocol, enabling seamless connectivity and interaction between early blockchain innovators and investors. Raising funds in a decentralised world has always been considered a daunting task, and several models designed for this purpose have been criticised for their deficiencies, including long and costly time to market, a lack of financial security, and poor access for low-budget innovators.

Addressing these shortcomings, Logarithmic Finance (LOG) entered the market to tackle the issues at both ends. It proposes a liquidity pool that fills the gap and acts as a bridge between innovators and investors. This idea of liquidity pools enhances the purchasing power of innovators seeking first market access across open blockchain networks. The basic idea behind the creation of this cryptocurrency is an interactive community, advanced features for project innovators and liquidity providers, and secure, scalable, cost-effective DeFi innovation. Let's dig into its attributes to identify its true potential and whether it will be able to compete with the giants of the crypto industry.

Fully Homomorphic Encryption

The goal is to provide strong security for the switching mechanism available to the end-user through fully homomorphic encryption, eliminating the need to decrypt packets while computation takes place behind the scenes. The potential can be understood from the fact that fully homomorphic encryption allows arbitrary computation on encrypted data.
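
To make "computation on encrypted data" concrete, here is a toy sketch of the simpler, additively homomorphic Paillier scheme. Real fully homomorphic schemes such as BGV or CKKS support arbitrary computation, not just addition; this is a from-scratch illustration with insecure, tiny key sizes and is unrelated to Logarithmic Finance's actual implementation.

```python
# Toy additively homomorphic encryption (Paillier) -- illustration only.
# Multiplying two ciphertexts yields an encryption of the SUM of the
# plaintexts, i.e., we compute on data without ever decrypting it.
# The primes here are far too small to be secure. Requires Python 3.9+.
import math
import random

def keygen(p: int = 61, q: int = 53):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                       # standard simplified choice of g
    mu = pow(lam, -1, n)            # modular inverse of lambda mod n
    return (n, g), (lam, mu, n)

def encrypt(pub, m: int) -> int:
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:      # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c: int) -> int:
    lam, mu, n = priv
    l = (pow(c, lam, n * n) - 1) // n   # the scheme's "L" function
    return (l * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
c_sum = (c1 * c2) % (pub[0] ** 2)   # ciphertext multiply => plaintext add
print("decrypted sum:", decrypt(priv, c_sum))   # 42, inputs never decrypted
```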

On-chain Data

In addition to waterfall project management, DevOps best practices, and the implementation of fully homomorphic encryption, the platform collects feedback on on-chain data from time to time. This feedback is critical for Logarithmic Finance (LOG) because it helps engineering and UI/UX design teams make the necessary changes to the interface and other features to provide a seamless experience for users on the platform.

Multi-chain

Logarithmic Finance (LOG) is becoming a fully decentralised Layer 3 exchange protocol. In this regard, it is important to combine interoperability between heterogeneous blockchains with interchain communication. Since this is a complex implementation, these integrations should be done in a multi-step deployment before replacing the pseudo-centralised bridge with a fully distributed consensus mechanism.

Cross-chain

Cross-chain integration facilitates multiple use cases and extends the reach of innovators and investors on the platform. For example, innovators will be able to auction Ethereum (ETH) tokens to the NEO network to take advantage of low-cost transaction fees and scalability. In addition, cross-chain integration also supports P2P transactions between different blockchains.

Inexpensive Gas Fee

Its experienced development team followed a minimal approach, including a clean modular code structure, to design a robust code architecture for the platform. In addition, the platform ensures that only critical data is pinned to the blockchain, optimising the resources deployed. By combining all these practices, Logarithmic Finance can balance all transactions made on the platform and reduce gas fees.

NFT Integration

The platform introduces an NFT auction function dedicated to LOG token holders. Project innovators can exchange various cryptocurrencies or stablecoins for non-fungible tokens. After the cross-chain integration rollout is complete, more stablecoins and networks will be introduced, ultimately enhancing the NFT exchange experience.

Disclaimer: This article is a paid publication and does not have journalistic/ editorial involvement of Hindustan Times. Hindustan Times does not endorse/ subscribe to the contents of the article/advertisement and/or views expressed herein.

The reader is further advised that Crypto products and NFTs are unregulated and can be highly risky. There may be no regulatory recourse for any loss from such transactions.

Hindustan Times shall not in any manner, be responsible and/or liable in any manner whatsoever for all that is stated in the article and/or also with regard to the views, opinions, announcements, declarations, affirmations etc., stated/featured in the same. The decision to read hereinafter is purely a matter of choice and shall be construed as an express undertaking/guarantee in favour of Hindustan Times of being absolved from any/ all potential legal action, or enforceable claims. The content may be for information and awareness purposes and does not constitute financial advice.


Read more:
Logarithmic Finance Might Be The Next Big Crypto In World Like Ethereum And Bnb | Mint - Mint

Read More..

Data exposure at the Texas Department of Insurance. ICCL report details RTB ad tracking. Credit card scraping operation. – The CyberWire

At a glance.

A state audit has determined that the personal information of nearly 2 million individuals who filed compensation claims with the Texas Department of Insurance (TDI), an agency that oversees the state's insurance industry, was exposed for nearly three years. According to the audit, released this week, the compromised data includes Social Security numbers, addresses, dates of birth, phone numbers, and employee injury information, and was publicly accessible online from March 2019 to January 2022. The Texas Tribune reports the leak was the result of a flaw in the programming code of the web application used by TDI to manage workers' compensation data. TDI spokesperson Ben Gonzalez explained, "We found the issue was due to programming code that allowed internet access to a protected area of the application. We fixed the programming code issue and put the TDI web application back online. We began an investigation to find the nature and scope of the issue." Gonzalez added that the investigation did not uncover any evidence that the data had been misused. Nonetheless, Insurance Business America adds, TDI will send notification letters to the impacted individuals, including instructions on how they can enroll for free credit monitoring.

Amit Shaked, CEO, Laminar, finds the kind of error implicated in this incident regrettable. He wrote, "This event is truly unfortunate as it is not due to an attack or malicious activity. It was due to a missed code glitch that left personal data exposed for years. Today's digital world requires layering on data-centric security where policies are at the data object level, like detecting excessive exposure in this case. To combat the growing threat to data protection in the cloud, data security teams require a set of cloud native tools that are automated and always continuously monitoring. These automated solutions will transform security teams from gatekeepers to enablers of data democratization."

It's also another case of abused privileged credentials, Arti Raman, CEO and founder of Titaniam, wrote: "As this incident proved, information can be accessed using privileged credentials, or strictly from a code glitch, allowing not only the general public to see this information, but hackers to steal underlying data. To keep customer PII safe and minimize the risk of extortion, encryption, specifically data-in-use encryption, also referred to as encryption-in-use, is recommended. Data-in-use encryption provides unmatched immunity. Should adversaries break through perimeter security infrastructure and access measures, or simply gain access through a technical error, data-in-use encryption keeps the data and IP encrypted and protected even when it is being actively utilized. This helps neutralize all possible data-related leverage and limits the need for breach disclosure."

Neil Jones, director of cybersecurity evangelism at Egnyte, notes that the data maintained by this agency inevitably includes a great deal of PII. "The recent data breach at the Texas Department of Insurance is especially concerning because worker's compensation data inherently includes PII (Personally Identifiable Information) and PHI (Protected Health Information), which are potential treasure troves for cyberattackers. Although there's no current evidence that the breached information has been used maliciously, it is not uncommon for attackers to wait for just the right time to post their breached data to the Dark Web," he writes. "There are several key lessons that can be learned from this incident."

Erfan Shadabi, cybersecurity expert with data security specialists comforte AG, notes the special responsibility of state agencies: "We depend on the state agencies to provide us with a basic level of security against all threats. The recent incident with the Texas Department of Insurance, in which the personal information of 1.8 million workers has been exposed, should underscore the need for data-centric security such as tokenization or format-preserving encryption to be applied to sensitive data wherever it resides, in order to render that data incomprehensible and thus worthless for exploitation if bad actors get ahold of it. Preventing attacks and breaches is not 100% foolproof, so we can only hope that governmental agencies have instituted the mitigating measures of data-centric security applied directly to data in case sensitive information falls into the wrong hands."

On Tuesday the Irish Council for Civil Liberties (ICCL) released a report including new data on what it's calling the biggest data breach ever recorded: the real-time bidding (RTB) system's abuse of web users' info for tracking and ad targeting. According to the report, through the use of RTB, a surveillance-based ad auction system, Google and other tech giants have been processing and sharing user data billions of times per day. The ICCL explains, "[RTB] tracks and shares what people view online and their real-world location 294 billion times in the U.S. and 197 billion times in Europe every day."

Figures in the ICCL's report, obtained from a confidential source, show that users in the US state of Colorado and the UK are among the most exposed by the system, with 987 and 462 RTB broadcasts respectively per person per day. Americans have their online activity and real-world location exposed 57% more often than Europeans, likely due to differences in privacy regulations across the two regions. The biggest culprit, Google, allows 4,698 companies to receive RTB data about US users, while Microsoft says it may send data to 1,647 companies. Questions have been raised about how RTB could be exposing sensitive data individuals share online, from women's fertility cycles stored in period tracking apps, to Black Lives Matter protestors' locations, to the romantic histories of users of Grindr and other dating apps.

The report could have repercussions for European regulators in particular, given that the General Data Protection Regulation (GDPR) has been in place since May 2018 but regulators have been seemingly reluctant to penalize the adtech industry. Johnny Ryan, senior fellow at the ICCL, told TechCrunch, "As we approach the four year anniversary of the GDPR we release data on the biggest data breach of all time. And it is an indictment of the European Commission, and in particular commissioner [Didier] Reynders, that this data breach is repeated every day."

An FBI Flash notice warns that unidentified threat actors were scraping credit card data from an unnamed US business by injecting malicious PHP Hypertext Preprocessor code into the business's online checkout page. The scraped data was being sent to an actor-controlled server spoofing a legitimate card processing server. The attackers also established backdoor access to the victim's system by modifying two files within the checkout page. The notice details new indicators of compromise for e-commerce sites and lists recommended mitigations, which include updating and patching all systems, changing default login credentials, monitoring e-commerce environments for possible malicious activity, segregating network systems, and securing all websites that transfer sensitive information by using the secure sockets layer (SSL) protocol.
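
Since the attackers gained persistence by modifying two files within the checkout page, one common defensive measure (a general practice, not a quote from the Flash notice) is file-integrity monitoring. Below is a minimal sketch with hypothetical paths and a hypothetical baseline file name.

```python
# Minimal file-integrity monitoring sketch (illustrative, not from the FBI notice):
# hash every file that makes up the checkout page and compare against a known-good
# baseline, flagging modified or newly added files. Paths are hypothetical.
import hashlib
import json
from pathlib import Path

CHECKOUT_DIR = Path("/var/www/shop/checkout")    # hypothetical web root
BASELINE_FILE = Path("checkout_baseline.json")   # hypothetical baseline store

def snapshot(directory: Path) -> dict:
    """Map each file under `directory` to its SHA-256 digest."""
    return {
        str(p.relative_to(directory)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(directory.rglob("*")) if p.is_file()
    }

def check(directory: Path, baseline_path: Path) -> list:
    """Compare the current snapshot against the stored baseline."""
    baseline = json.loads(baseline_path.read_text())
    current = snapshot(directory)
    alerts = []
    for name, digest in current.items():
        if name not in baseline:
            alerts.append(f"new file appeared: {name}")
        elif baseline[name] != digest:
            alerts.append(f"file modified: {name}")
    return alerts

if __name__ == "__main__":
    if not BASELINE_FILE.exists():
        # First run: record the known-good state of the checkout page.
        BASELINE_FILE.write_text(json.dumps(snapshot(CHECKOUT_DIR), indent=2))
    else:
        for alert in check(CHECKOUT_DIR, BASELINE_FILE):
            print("ALERT:", alert)
```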

Read more here:
Data exposure at the Texas Department of Insurance. ICCL report details RTB ad tracking. Credit card scraping operation. - The CyberWire

Read More..

How the Online Safety Bill jeopardises the foundation of security online – PoliticsHome

Sheetal Kumar, Head of Global Engagement and Advocacy | Global Partners Digital

Undermining encryption means the Online Safety Bill in its current form is not fit for purpose.

On the heels of the UK's signature on a declaration to protect human rights, fundamental freedoms, and the free flow of information online, the UK Online Safety Bill does the opposite by undermining a critical part of the equation: encryption.

The UK's Online Safety Bill was introduced into the House of Commons on the 17th of March. Despite its stated aim to make the UK the safest place online, it would create serious security and privacy vulnerabilities by introducing a new surveillance power that would disproportionately impact those who need protection most, especially vulnerable groups, including children. Clause 103(2) is particularly worrisome because it gives Ofcom the power to undermine the same human rights the UK recently committed to uphold in the Declaration for the Future of the Internet.

The bill is lengthy, and clause 103(2)(b) has not received much attention. However, this is a dangerous measure that puts the lives and rights of so many at risk by undermining encryption, and it must be stopped.

Encryption is a critical technology that helps Internet users keep information and communications confidential between the sender and intended receiver. Forty-five technologists, security experts, and NGOs, including members of the Global Encryption Coalition, recently published an open letter highlighting how the Online Safety Bill threatens end-to-end encryption, the strongest form of this security tool. The letter notes that clause 103(2)(b) could result in notices requiring that providers of such services introduce scanning capabilities into their platforms to scan all user content. The global technology company Apple made a similar proposal for its messaging services last year and, following outcry from security experts, withdrew the plan. It was unworkable then and it remains unworkable now.

Millions of people worldwide rely on encryption for their personal security in times of crisis. For instance, the UK's efforts to get people in conflict zones like Afghanistan and Ukraine to safety would be significantly hindered without the security assured by private messaging apps and communications. Moreover, the legislation poses a serious threat to the health of our national economy, through the high cost of compliance and the associated cost of leaving all businesses at greater risk of cyber crime once backdoors to encrypted messages exist. This has already happened in Australia as a result of the Telecommunications and Other Legislation Amendment (Assistance and Access) Act (TOLA).

Such scanning cannot be accomplished on end-to-end encrypted services because no one, including the provider, has access to the content carried on that service except for the sender and the intended recipient(s). As a result, such a requirement would force service providers to compromise or abandon end-to-end encryption, and would set a dangerous precedent of introducing new surveillance technologies into the devices we use every day. These technologies could be exploited by criminals and hostile governments, thereby undermining personal and national security. Beyond these concerns, such an approach could be replicated by other governments, including in countries with weak democratic institutions. It also marks a stark departure from the EU's prohibition on member states obliging general monitoring of communications. As a result, it risks misalignment with one of the UK's largest trading partners.

Strong encryption protects private information and is integral to the ability to do business, work securely, and build and maintain relationships that are vital to everyday life. Fighting crime is critical, but there are ways to do it without putting our personal safety, human rights, and digital economy at risk of harm. In a world where we increasingly rely on digital technology, users need these everyday digital tools to be secure. Clause 103(2)(b) of the Online Safety Bill would have a detrimental impact on the UK and Internet users around the world, and for that reason it should be dropped.

For more information about why the Online Safety Bill needs to change, please click here.

PoliticsHome Newsletters

Get the inside track on what MPs and Peers are talking about. Sign up to The House's morning email for the latest insight and reaction from Parliamentarians, policy-makers and organisations.

Originally posted here:
How the Online Safety Bill jeopardises the foundation of security online - PoliticsHome

Read More..

Why Banks Have Been Slow to Embrace the Cloud and What Could Soon Change That – BizTech Magazine

At this time, high-level enterprise functions such as collaboration tools, customer relationship management and IT operations are the most likely cloud workloads for financial institutions. More fundamental functions, including risk and compliance, capital markets, and consumer and commercial banking, make up 4 percent or less of cloud workloads.

However, trends in cloud computing appear to suggest a firmer embrace in the years to come. Per data from the American Bankers Association, at least 90 percent of banks maintain at least some data, applications or operations in the cloud, and 91 percent expect to increase cloud use in the coming years. This will most likely be for functions that can improve the customer experience, such as digital banking apps and CRM tools.

There are genuine reasons for banks to be cautious with cloud computing. These don't stem from technical hesitancy, but rather from a desire to be careful with issues of risk, which carry different meaning for the financial sector than for other fields.

In 2019 testimony before a task force of the House Financial Services Committee, Paul Benda, the American Bankers Association's senior vice president for operational risk and cybersecurity, explained why the industry has traditionally been slow to embrace the cloud, citing a mix of regulatory concerns, security desires and a goal of risk management.

"Although there are compelling business and operational resilience reasons for financial institutions to consider the use of the cloud, it is critical that financial institutions first put in place strong and effective risk mitigation strategies to address the risks that are unique to the cloud," Benda told the committee.


His commentary points to Title V of the Gramm-Leach-Bliley Act, a 1999 law that requires banks to respect the privacy of their customers and to protect the security and confidentiality of those customers' nonpublic personal information.

"These standards apply equally, regardless of whether that information is stored or handled by a financial institution or its vendor on the financial institution's own system or in a third-party cloud," Benda added in his testimony. "These standards also require that financial institutions have in place incident response programs to address security incidents involving unauthorized access to customer information, including notifying customers of possible breaches when appropriate."

Despite the concerns about liability and organizational risk, the banking industry collectively sees high potential in the cloud. Benda emphasized a willingness for a more collaborative approach.

"The challenges in this space are complex, and we believe that every stakeholder wants to ensure that the security of these critical systems is maintained and at the same time innovation is not hindered," he explained.


Go here to see the original:
Why Banks Have Been Slow to Embrace the Cloud and What Could Soon Change That - BizTech Magazine

Read More..

Scalability and elasticity: What you need to take your business to the cloud – VentureBeat


By 2025, 85% of enterprises will have a cloud-first principle, a more efficient way to host data than on-premises. The shift to cloud computing, amplified by COVID-19 and remote work, has meant a whole host of benefits for companies: lower IT costs, increased efficiency and reliable security.

With this trend continuing to boom, the threat of service disruptions and outages is also growing. Cloud providers are highly reliable, but they are not immune to failure. In December 2021, Amazon reported seeing multiple Amazon Web Services (AWS) APIs affected, and, within minutes, many widely used websites went down.

So, how can companies mitigate cloud risk, prepare themselves for the next AWS shortage and accommodate sudden spikes of demand?

The answer is scalability and elasticity, two essential aspects of cloud computing that greatly benefit businesses. Let's talk about the differences between scalability and elasticity and see how they can be built at the cloud infrastructure, application and database levels.

Both scalability and elasticity are related to the number of requests that can be made concurrently in a cloud system. They are not mutually exclusive; both may have to be supported separately.

Scalability is the ability of a system to remain responsive as the number of users and traffic gradually increases over time. Therefore, it is long-term growth that is strategically planned. Most B2B and B2C applications that gain usage will require this to ensure reliability, high performance and uptime.

With a few minor configuration changes and button clicks, in a matter of minutes, a company could scale their cloud system up or down with ease. In many cases, this can be automated by cloud platforms with scale factors applied at the server, cluster and network levels, reducing engineering labor expenses.
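
As one concrete illustration of that automation (an assumption about tooling, not something named in the article), this is roughly what a target-tracking auto scaling policy looks like on AWS using boto3. The group name, capacity bounds and CPU target are placeholder values, and other cloud platforms expose equivalent controls.

```python
# Sketch of automating scale-up/scale-down on one cloud platform (AWS Auto
# Scaling via boto3). Group name, capacity bounds and CPU target are
# placeholders; equivalent knobs exist on other providers.
import boto3

autoscaling = boto3.client("autoscaling")

# Keep between 2 and 20 instances in the group.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-tier",   # hypothetical group name
    MinSize=2,
    MaxSize=20,
)

# Let the platform add or remove servers so average CPU stays near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```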

Elasticity is the ability of a system to remain responsive during short-term bursts or high instantaneous spikes in load. Some examples of systems that regularly face elasticity issues include NFL ticketing applications, auction systems and insurance companies during natural disasters. In 2020, the NFL was able to lean on AWS to livestream its virtual draft, when it needed far more cloud capacity.

A business that experiences unpredictable workloads but doesn't want a preplanned scaling strategy might seek an elastic solution in the public cloud, with lower maintenance costs. This would be managed by a third-party provider and shared with multiple organizations using the public internet.

So, does your business have predictable workloads, highly variable ones, or both?

When it comes to scalability, businesses must watch out for over-provisioning or under-provisioning. This happens when tech teams don't provide quantitative metrics around the resource requirements for applications, or when the back-end idea of scaling is not aligned with business goals. To determine a right-sized solution, ongoing performance testing is essential.

Business leaders reading this must speak to their tech teams to find out how they discover their cloud provisioning schematics. IT teams should be continually measuring response time, the number of requests, CPU load and memory usage to watch the cost of goods (COG) associated with cloud expenses.
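
A minimal sketch of that kind of continuous measurement, using the cross-platform psutil library; the thresholds are arbitrary illustrative values, and a real setup would feed these samples into a metrics pipeline rather than print them.

```python
# Continuous CPU / memory sampling sketch (illustrative thresholds only).
import psutil

CPU_HIGH, MEM_HIGH = 80.0, 85.0        # illustrative alert thresholds (percent)

def sample(interval_seconds: float = 5.0) -> None:
    while True:
        cpu = psutil.cpu_percent(interval=interval_seconds)  # blocks for the interval
        mem = psutil.virtual_memory().percent
        print(f"cpu={cpu:.1f}% mem={mem:.1f}%")
        if cpu > CPU_HIGH or mem > MEM_HIGH:
            print("WARNING: sustained load -- consider scaling up or out")
        elif cpu < 10 and mem < 30:
            print("NOTE: very low utilization -- possible over-provisioning")

if __name__ == "__main__":
    sample()
```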

There are various scaling techniques available to organizations based on business needs and technical constraints. So, will you scale up or out?

Vertical scaling involves scaling up or down and is used for applications that are monolithic, often built prior to 2017, and may be difficult to refactor. It involves adding more resources such as RAM or processing power (CPU) to your existing server when you have an increased workload, but this means scaling has a limit based on the capacity of the server. It requires no application architecture changes as you are moving the same application, files and database to a larger machine.

Horizontal scaling involves scaling in or out and adding more servers to the original cloud infrastructure to work as a single system. Each server needs to be independent so that servers can be added or removed separately. It entails many architectural and design considerations around load-balancing, session management, caching and communication. Migrating legacy (or outdated) applications that are not designed for distributed computing must be refactored carefully. Horizontal scaling is especially important for businesses with high availability services requiring minimal downtime and high performance, storage and memory.
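
A toy sketch of the horizontal-scaling idea: a round-robin router over a pool of interchangeable servers that can be added (scale out) or removed (scale in) independently. The server names are placeholders; real deployments would use a managed load balancer.

```python
# Toy round-robin load balancing over an elastic pool of stateless servers.
from typing import List

class ServerPool:
    def __init__(self, servers: List[str]):
        self.servers = list(servers)

    def add(self, server: str) -> None:       # scale out
        self.servers.append(server)

    def remove(self, server: str) -> None:    # scale in
        self.servers.remove(server)

    def route(self, request_id: int) -> str:
        # Servers must be independent and stateless, which is why session
        # state belongs in a shared cache or database, as noted above.
        return self.servers[request_id % len(self.servers)]

pool = ServerPool(["app-1", "app-2"])
pool.add("app-3")                             # respond to a traffic increase
for i in range(6):
    print(f"request {i} -> {pool.route(i)}")
```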

If you are unsure which scaling technique better suits your company, you may need to consider a third-party cloud engineering automation platform to help manage your scaling needs, goals and implementation.

Let's take a simple healthcare application (which applies to many other industries, too) to see how it can be developed across different architectures and how that impacts scalability and elasticity. Healthcare services were heavily under pressure and had to drastically scale during the COVID-19 pandemic, and could have benefitted from cloud-based solutions.

At a high level, there are two types of architectures: monolithic and distributed. Monolithic (or layered, modular monolith, pipeline, and microkernel) architectures are not natively built for efficient scalability and elasticity: all the modules are contained within the main body of the application and, as a result, the entire application is deployed as a single whole. There are three types of distributed architectures: event-driven, microservices and space-based.

The simple healthcare application has a patient portal, a physician portal, and an office portal.

The hospital's services are in high demand, and to support the growth, they need to scale the patient registration and appointment scheduling modules. This means they only need to scale the patient portal, not the physician or office portals. Let's break down how this application can be built on each architecture.

Tech-enabled startups, including in healthcare, often go with this traditional, unified model for software design because of the speed-to-market advantage. But it is not an optimal solution for businesses requiring scalability and elasticity. This is because there is a single integrated instance of the application and a centralized single database.

For application scaling, adding more instances of the application with load-balancing ends up scaling out the other two portals as well as the patient portal, even though the business doesn't need that.

Most monolithic applications use a monolithic database, one of the most expensive cloud resources. Cloud costs grow exponentially with scale, and this arrangement is expensive, especially regarding maintenance time for development and operations engineers.

Another aspect that makes monolithic architectures unsuitable for supporting elasticity and scalability is the mean-time-to-startup (MTTS), the time a new instance of the application takes to start. It usually takes several minutes because of the large scope of the application and database: engineers must create the supporting functions, dependencies, objects, and connection pools and ensure security and connectivity to other services.

Event-driven architecture is better suited than monolithic architecture for scaling and elasticity. For example, it publishes an event when something noticeable happens. That could look like shopping on an ecommerce site during a busy period, ordering an item, but then receiving an email saying it is out of stock. Asynchronous messaging and queues provide back-pressure when the front end is scaled without scaling the back end by queuing requests.
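
A minimal sketch of that queue-based back-pressure (a generic illustration, not tied to any particular message broker): the front end publishes events into a bounded queue, and when the slower back end falls behind and the queue fills, the producer blocks instead of overwhelming the back end.

```python
# Bounded-queue back-pressure between a fast producer and a slow consumer.
import queue
import threading
import time

events = queue.Queue(maxsize=10)   # bounded queue = back-pressure

def front_end(n_orders: int) -> None:
    for i in range(n_orders):
        events.put({"event": "order_placed", "order_id": i})  # blocks when full
        print(f"published order {i}")

def back_end() -> None:
    while True:
        event = events.get()
        time.sleep(0.1)                            # simulate slow processing
        print(f"processed order {event['order_id']}")
        events.task_done()

threading.Thread(target=back_end, daemon=True).start()
front_end(30)
events.join()                                      # wait for everything to drain
```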

In this healthcare application case study, this distributed architecture would mean each module is its own event processor; there's flexibility to distribute or share data across one or more modules. There's some flexibility at an application and database level in terms of scale as services are no longer coupled.

This architecture views each service as a single-purpose service, giving businesses the ability to scale each service independently and avoid consuming valuable resources unnecessarily. For database scaling, the persistence layer can be designed and set up exclusively for each service for individual scaling.

Along with event-driven architecture, these architectures cost more in terms of cloud resources than monolithic architectures at low levels of usage. However, with increasing loads, multitenant implementations, and in cases where there are traffic bursts, they are more economical. The MTTS is also very efficient and can be measured in seconds due to fine-grained services.

However, with the sheer number of services and their distributed nature, debugging may be harder and there may be higher maintenance costs if services aren't fully automated.

This architecture is based on a principle called tuple-space processing: multiple parallel processors with shared memory. This architecture maximizes both scalability and elasticity at an application and database level.

All application interactions take place with the in-memory data grid. Calls to the grid are asynchronous, and event processors can scale independently. With database scaling, there is a background data writer that reads and updates the database. All insert, update or delete operations are sent to the data writer by the corresponding service and queued to be picked up.
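
A small sketch of that background data-writer pattern; the in-memory grid and database are stand-in dictionaries here, purely to show how writes are queued and applied asynchronously rather than any specific product's API.

```python
# Background data-writer sketch: services touch only the in-memory grid, while
# queued insert/update/delete operations are applied to the database later.
import queue
import threading

in_memory_grid: dict = {}            # stand-in for the in-memory data grid
database: dict = {}                  # stand-in for the durable database
write_queue = queue.Queue()

def data_writer() -> None:
    """Drain queued operations and apply them to the database."""
    while True:
        op, key, value = write_queue.get()
        if op in ("insert", "update"):
            database[key] = value
        elif op == "delete":
            database.pop(key, None)
        write_queue.task_done()

def register_patient(patient_id: str, record: dict) -> None:
    in_memory_grid[patient_id] = record              # fast, in-memory interaction
    write_queue.put(("insert", patient_id, record))  # persistence happens later

threading.Thread(target=data_writer, daemon=True).start()
register_patient("p-001", {"name": "Ada", "appointment": "2022-06-01"})
write_queue.join()
print("grid:", in_memory_grid)
print("db:  ", database)
```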

MTTS is extremely fast, usually taking a few milliseconds, as all data interactions are with in-memory data. However, all services must connect to the broker, and the initial cache load must be created with a data reader.

In this digital age, companies want to increase or decrease IT resources as needed to meet changing demands. The first step is moving from large monolithic systems to distributed architecture to gain a competitive edge; this is what Netflix, Lyft, Uber and Google have done. However, the choice of architecture is subjective, and decisions must be taken based on the capability of developers, mean load, peak load, budgetary constraints and business-growth goals.

Sashank is a serial entrepreneur with a keen interest in innovation.


Originally posted here:
Scalability and elasticity: What you need to take your business to the cloud - VentureBeat

Read More..

Stacklet Named a 2022 Cool Vendor in Cloud Computing by Gartner – Business Wire

ARLINGTON, Va.--(BUSINESS WIRE)--Stacklet, developers of the industry-first cloud governance as code platform based on the open source Cloud Custodian project, today announced it has been recognized in the 2022 Gartner Cool Vendors in Cloud Computing report. We think this recognition builds on its continued momentum, including the recently announced Stacklet SaaS Platform which makes it easier and frictionless for organizations to shift to governance as code model.

Innovating efficiently and securely in the cloud requires a paradigm shift from traditional approaches to governance. Governance as code is a new paradigm that allows cloud engineering, security, and FinOps teams to quickly understand, codify, and automate cloud governance for a frictionless experience for development teams and rapid cloud adoption.

Cloud Custodian, an open source project and part of the Cloud Native Computing Foundation (CNCF), is rapidly becoming the de facto standard for cloud governance as code, with millions of downloads occurring globally each month. The Stacklet Platform extends Cloud Custodian with intelligent management capabilities like governance insights, real-time asset inventory, out-of-the-box policy packs, and advanced communications to make it easier for DevSecOps and FinOps teams to automate and enforce governance policies at scale.

"We believe being named a Cool Vendor by Gartner is a strong recognition of how governance as code and Stacklet can help organizations scale operations in the cloud," said Travis Stanfield, co-founder, and CEO, of Stacklet. "We are looking forward to continuing our momentum and helping our customers control costs and be secure across multiple cloud platforms in a way that doesn't hinder developer velocity."

Supporting Resources

You can access the full report here: Gartner Cool Vendors in Cloud Computing, Arun Chandrasekaran, Sid Nag, et al., 26 April 2022.

Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER and COOL VENDORS are registered trademarks and service marks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

About Stacklet

Stacklet was founded by the creator and lead maintainer of Cloud Custodian, an open source cloud native security and governance project used by thousands of well-known global brands today. Stacklet provides the commercial cloud governance platform that accelerates how organizations manage their security, asset visibility, operations, and cost optimization policies in the cloud. For more information, go to https://stacklet.io or follow @stackletio.

See the original post here:
Stacklet Named a 2022 Cool Vendor in Cloud Computing by Gartner - Business Wire

Read More..

Multi-cloud: balancing the cloud concentration regulation risk with the innovation reward – Finextra

Regardless of size and business mix, most financial institutions have come to understand how cloud and multi-cloud computing services can benefit them. There are cost benefits when it comes to scale, deploying new services and innovating. There are security and resiliency benefits that can be difficult and expensive to replicate on-premises, especially for smaller institutions trying to keep pace with rapidly changing standards. And there is geographic access to new markets from China to Canada that require deployment of local, in-country systems under emerging sovereignty laws.

However, as the industry continues to embrace cloud services, regulators are becoming more aware of the challenges associated with cloud computing, especially those that could expose financial institutions to systemic risks potentially undermining the stability of the financial system. The Financial Stability Board (FSB) and the European Banking Authority have urged regulators worldwide to review their supervisory frameworks to ensure that different types of cloud computing activities are fully scoped into industry guidelines.

At the same time, public cloud provider outages have disproved the "never fail" paradigm, and there are growing calls for heightened diligence around cybersecurity risks. This is causing regulators to focus on cloud concentration risks as well, because of the potential peril created when the technology underpinning global financial services relies on so few large cloud service providers.

So how do financial institutions balance the risk versus the reward of the cloud?

Understanding the risk

The concern over infrastructure concentration and consolidation is twofold. First is the systemic risk of having too many of the world's banking services concentrated on so few public cloud platforms. Historically, this problem did not exist as each bank operated its own on-premises infrastructure. Failure in a data centre was always limited to one single player in the market.

Second is the vulnerability of individual institutions, including many smaller institutions, that outsource critical banking infrastructure and services to a few solution providers. These software-as-a-service hyperscalers also tend to run on a single cloud platform, creating cascading problems across thousands of institutions in the event of an outage.

In both cases, performance, availability, and security-related concerns are motivating regulators who fear that a provider outage, caused either internally or by bad external actors, could cripple the financial systems under their authority.

For financial services companies, the stakes of a service interruption at a single cloud service provider (CSP) rise exponentially as they begin to run more of their critical functions in the public cloud.

Regulators have so far offered financial institutions warnings and guidance rather than enacting new regulations, though they are increasingly focused on ensuring that the industry is considering plans, such as cloud exit strategies, to mitigate the risk of service interruptions and their knock-on effects across the financial system.

The FSB first raised formal public concern about cloud concentration risk in an advisory published in 2019, and has since sought industry and public input to inform a policy approach. However, authorities are now exploring expanding regulations, which could mean action as early as 2022. The European Commission has published a legislative proposal on Digital Operational Resilience aimed at harmonising existing digital governance rules in financial services including testing, information sharing, and information risk management standards. The European Securities & Markets Authority warned in September 2021 of the risks of high concentration in cloud computing services providers, suggesting that requirements may need to be mandated to ensure resiliency at firms and across the system.

Likewise, the Bank of England's Financial Policy Committee said it believes additional measures are needed to mitigate the financial stability risks stemming from concentration in the provision of some third-party services. Those measures could include the designation of certain third-party service providers as critical, introducing new oversight to public cloud providers; the establishment of resilience standards; and regular resilience testing. They are also exploring controls over employment and sub-contractors, much like energy and public utility companies do today.

To get ahead of regulators, steps should be taken to address the underlying issues.

From hybrid to multi-cloud

Looking at the existing banking ecosystem, a full embrace of the cloud is extremely rare. While they would like to be able to act like challenger and neo banks, many of the largest and most technology-forward established banks and financial services firms have adopted a hybrid cloud architecture linking on-premises data centres to cloud-based services as the backbone of an overarching enterprise strategy. Smaller regional and national institutions, while not officially adopting a cloud-centric mindset, are beginning to explore the advantages of cloud services by working with cloud-based SaaS providers through their existing ISVs and systems integrators.

In these scenarios, some functions get executed in legacy, on-premises data centres and others, such as mobile banking or payment processing, are operated out of cloud environments, giving the benefits of speed and scalability.

Moving to a hybrid approach has itself been an evolution. At first, financial institutions put non-core applications in a single public cloud provider to trial its capabilities. Some pursued deployments on multiple cloud vendors to handle different tasks, while maintaining robust on-premises primary systems, both to pair with public cloud deployments and to power core services.

While a hybrid approach utilising one or two separate cloud providers works for now, the next logical step (taken by many fintech startups) is to fully embrace the cloud and, eventually, a multi-cloud approach that moves away from on-premises infrastructure entirely.

Solve for the cloud concentration risks

Recent service disruptions at the top public cloud providers remind us that no matter how many data centres they run, single cloud providers remain vulnerable to weaknesses created by their own network complexity and interconnectivity across sites. Disruptions vary in severity, but when an institution relies on a single provider for cloud services, it exposes its business to the risk of potential service shocks originating from that organisation's technical dependencies.

By distributing data across multiple clouds, they can improve high availability and application resiliency without sacrificing latency. This enables financial services firms to distribute their data in a single cluster across Azure, AWS, and Google Cloud while also distributing data across many regions available across these CSPs.

This is particularly relevant for financial services firms that must comply with data sovereignty requirements, but have limited deployment options due to sparse regional coverage on their primary cloud provider. In some cases, only one in-country region is available, leaving users especially vulnerable to disruptions in cloud service.

Going beyond the regulations

Beyond the looming regulatory issues, there are a number of practical business and technology limitations of a single-cloud approach that the industry must address to truly future-proof their infrastructure.

Geographic constraints: not all cloud service providers operate in every business region and the availability of local cloud solutions grows increasingly important as more countries adopt data sovereignty and residency laws designed to govern how data is collected, stored and used locally.

Vendor lock-in: there is a commercial risk in placing all of an institution's bets on one cloud provider. The more integration with a single cloud provider, the harder it becomes to negotiate the cost of cloud services or to consider switching to another provider.

Security homogeneity: while CSPs invest heavily in security features, in the event of an infrastructure meltdown or cyberattack, a multi-cloud environment can give organisations the ability to switch providers and to back up and protect their data.

Feature limitations: cloud service providers develop new features asynchronously. Some excel in specific areas of functionality and constantly innovate, while others focus on a different set of core capabilities. By restricting deployments to one cloud services provider, institutions limit their access to best-of-breed features across the cloud.

With pressure building from regulatory bodies at the same time as consumers increasingly demanding premium product experiences from financial services institutions, harnessing multi-cloud can satisfy both. It provides redundancy, security and peace of mind as infrastructure is not solely dependent on one CSP, while also providing the features and space to innovate on the very best the industry has to offer. Now is the time to embrace multi-cloud.

Follow this link:
Multi-cloud: balancing the cloud concentration regulation risk with the innovation reward - Finextra

Read More..

IBM CEO Arvind Krishna On The Future Of Big Blue – Forbes

Arvind Krishna began his career with IBM in 1990 and has been the CEO of IBM since April 2020 and Chairman since January 2021. Following his IBM Think 2022 Keynote in Boston, that I attended, Arvind sat down for a live round table with industry analysts to address a range of subjects in an open question format.

In this article, I will paraphrase Arvind's detailed and lengthy responses, hopefully providing clues to the future of Big Blue.

IBM CEO and Chairman Arvind Krishna conducts an analyst Q&A with VP of Analyst Relations Harriet Fryman.

Technology is an undisputed source of competitive advantage

Today, few CEOs think technology doesn't matter. Most CEOs will say that technology is the single most protected line item even in a down market and a bad economy because technology will provide a sustainable advantage. Government leaders all want a robust technology industry to increase gross domestic product (GDP) at the country level.

I would concur based on my experience in talking with CEOs. Treating IT as a cost of doing business will ultimately lead to a loss of competitiveness in the marketplace.

Arvind is a technologist at heart, so it is not surprising that he has placed technology and the necessary ecosystem of partners at the heart of the business model.

I believe that, in the new wave of tech CEOs, technology is respected more than ever and is a benefit to IBM.

Creativity and co-creation are critical

Today's IBM is more open to partnerships across the stack, based on the reality that no single company has all the expertise and technology to meet customers' needs.

IBM wants partners to succeed but will still play a lesser but critical role in providing innovative technology. IBM plans to focus on key technologies such as artificial intelligence (AI), hybrid cloud, quantum computing, and blockchain.

The rise of the cloud drives this significant change in strategy. The days of selling software, hardware, and consulting in one package are in the rear-view mirror for IBM. Cloud computing meant disruption across every layer of the stack. We all know the acronyms now: SaaS (software-as-a-service), PaaS (platform-as-a-service), and IaaS (infrastructure-as-a-service).

In a follow-up 1:1 with Arvind, he talked about products taking a front-row seat to consulting. Since his arrival, the company has shifted from 70% consulting and 30% products to 30% consulting and 70% products. This is a massive shift, and, to me, a "product guy," it says everything.

IBM's newly announced deals with AWS and SAP represent the theme of partner co-creation well. McDonald's is a good IBM client example, where IBM went so far as to acquire the tech arm of McDonald's, MCD Tech, to facilitate the new drive-through solution.

The hybrid cloud has come of age

Four years ago, there was general skepticism around the hybrid cloud with a preference for a single public cloud provider. That has changed as customers tried to avoid vendor lock-in, security, regulations, cost of moving data, and the desire to have a strategic architecture to address reality and stay flexible for the future. In four years, we went from a preference for one public cloud to the majority embracing a hybrid cloud model. Cloud is no longer a place but an operating model.

The hybrid cloud delivers the flexibility of deployment. The ability to deploy anywhere with security, scale, and ease of use with the end goal of frictionless development. There is also the incremental value (IBM believes it is two-and-a-half times more) from a hybrid cloud architecture than any singular architecture only on public or private.

There is no debate with me. I have written that the hybrid cloud model and multiple cloud providers are the norms for enterprises. The turning point for me was in 2018 when AWS announced Outposts, and the debate stopped. The public cloud began 15 years ago, and the hybrid cloud is in its infancy, so it will take years for the two to cross.

AI will transform every company and every industry

Technology is the only way to scale the business without linearly adding costs. Given the vast amount of data being created today on public clouds, private clouds, and at the edge, artificial intelligence is the only technology we know that can begin to do something with this data.

Given the shifts in labor and demographics, AI is the only option to automate and take complexity and cost out of enterprise processes.

AI will also play a critical role in cybersecurity. With labor shortages in the cybersecurity profession, artificial intelligence is the technology that will spot suspicious activity and bad actors.

I have written several articles detailing how organizations adopt AI to bring efficiency, productivity gains, and a return on investment. In these uncertain times, AI is a powerful differentiator for companies of all sizes to transform digitally.

Unlocking the full potential of Red Hat

Arvind was the driving force behind the acquisition of Red Hat in 2018 and the decision to keep the company autonomous.

Red Hat is one of the few companies that has managed to tap into open-source innovation and make a market out of it. The value of Red Hat comes because it can run on all infrastructures and work with all partners. The open-source culture is very different from a proprietary-source culture because of the commitment that anything Red Hat does will go back into upstream open source.

I think if IBM can keep Red Hat independent on most vectors, the sky is the limit for the company. There are only two on-prem container platforms that are extending to the public cloud, Red Hat and VMware, with HPE fielding a compelling alternative.

Accepting corporate responsibility for diversity and climate change

For IBM, it is a fundamental business priority. It is vital to constantly reflect the demographics of the societies we live in. If done well, IBM can attract and retain employees.

IBM has committed to being net-zero, without purchasing carbon offsets, by 2030, twenty years before the Paris Accords' goal.

I have long maintained that sustainability has become a fundamental business issue. I have written several articles that detailed how companies deal with the challenge. These challenges have become real business issues. It is not just a cost of doing business or an ESG (environmental, social, and governance) checkmark. I think sustainability offers a way to improve the business and lower costs.

Arvind is the first executive I had ever heard say that companies could save money through sustainability strategies.

Wrapping up

Unlike previous IBM CEOs, Arvind has prioritized communicating IBM's strategy and its value to industry analysts. His approach is very straightforward: he's open to criticism and receptive to feedback on how to improve.

As regular readers will know, I have followed IBM for several years. I believe IBM is on an improved tack, with a focus on AI and hybrid cloud. Incredibly enough, it is the leader in the next big step in the future of computing, quantum computing.

IBM is one of the few companies with the resources to tackle challenging problems that take years of persistence to make breakthroughs.

IBM has taken quantum computing from science fiction to where everything is now just science in ten years. It is hard to find another company with the same staying power. It's one of the only companies still doing that kind of research.

With Arvind at the reins and a group of brilliant people solving problems that make a big difference, I think IBM has a promising future.


Link:
IBM CEO Arvind Krishna On The Future Of Big Blue - Forbes

Read More..

Microsoft CEO Satya Nadella tells employees that pay increases are on the way – CNBC

Microsoft CEO Satya Nadella speaks during the Microsoft Annual Shareholders Meeting at the Meydenbauer Center on November 28, 2018 in Bellevue, Washington. Microsoft recently surpassed Apple, Inc. to become the world's most valuable publicly traded company.


Microsoft CEO Satya Nadella told staffers on Monday that the company is raising compensation as the labor market tightens and employees contend with increasing inflation.

A spokesperson for the company confirmed the pay increase, which was reported earlier by GeekWire.

"People come to and stay at Microsoft because of our mission and culture, the meaning they find in the work they do, the people they work with, and how they are rewarded," the spokesperson told CNBC in an email. "This increased investment in our worldwide compensation reflects the ongoing commitment we have to providing a highly competitive experience for our employees."

Inflation jumped 8.3% in April, remaining close to a 40-year high. Meanwhile, the U.S. economy continues to add jobs and unemployment has steadily been falling, reaching 3.6% last month. Tech companies have been responding with salary bumps.

Google parent Alphabet is adjusting its performance system in a way that will bring higher pay to workers, while Amazon committed to more than doubling maximum base pay for corporate employees.

Nadella told employees that the company is "nearly doubling the global merit budget" and allocating more money to people early and in the middle of their careers and those in specific geographic areas. He said the company is raising annual stock ranges by at least 25% for employees at level 67 and under. That includes several tiers in the company's hierarchy of software-engineering roles.

In the first quarter, Microsoft increased research and development costs, which include payroll and stock-based compensation costs, by 21%. The company bolstered spending in cloud engineering as Microsoft tries to keep pace with Amazon Web Services. Research and development growth has accelerated for five consecutive quarters.

While the biggest tech companies have been lifting pay to try and retain talent, some smaller companies have been implementing layoffs as the war in Ukraine and supply shortages strain their businesses. Carvana and Robinhood are among those that are cutting staff.


Read more here:
Microsoft CEO Satya Nadella tells employees that pay increases are on the way - CNBC

Read More..