
Altcoin Tax Reporting Software With IRS 8949, Schedule D Pre … – Digital Journal

PRESS RELEASE

Published April 4, 2023

The Crypto Merchant has announced the inclusion of Koinly to its carefully curated list of software services for crypto owners. Koinly tracks crypto transactions and generates pre-filled IRS forms, simplifying the reporting process through automation.

More information is available at https://koinly.io/?via=49642CB0&utm_source=affiliate

According to The Crypto Merchant, its latest announcement is aimed particularly at crypto owners who risk falling behind on their tax payments owing to a lack of tax reporting knowledge.

"For the average crypto holder, it can be difficult to understand what tax applies to which transaction," a company spokesperson said. "Let's say you just decided to hold your crypto, and it grew in value. Is that a taxable event or not?"

Koinly will be able to simplify these calculations for the user, they added.

It can track transactions made on over 6,000 types of cryptocurrencies, including every token deployed on the Ethereum blockchain. Koinly's dashboard allows users to see the total amount they have invested in crypto, their income or losses, and the corresponding tax liability generated through these events.

Koinly is compatible with over 100 wallets, including hardware devices such as Ledger and Trezor. It also integrates with almost every crypto exchange available today, allowing users to extract financial data from each platform automatically.

Furthermore, the software can pre-fill IRS documents, particularly Form 8949 and Schedule D, which are used to report sales of assets like stocks and crypto. It can also generate special reports for certain tax software, which users can upload to their accounts when creating a filing.
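To make the underlying arithmetic concrete (this is an illustration of what Form 8949 and Schedule D capture, not of Koinly's actual implementation), each disposal is reported as proceeds minus cost basis, and Schedule D separates short-term from long-term holdings. A minimal sketch in Python, with made-up figures:

```python
from datetime import date

# Hypothetical disposals: (description, date acquired, date sold, proceeds, cost basis)
disposals = [
    ("0.5 BTC", date(2021, 11, 2), date(2022, 6, 14), 10_500.00, 30_000.00),
    ("10 ETH", date(2022, 1, 5), date(2022, 12, 20), 12_000.00, 33_000.00),
]

short_term_total = 0.0
long_term_total = 0.0

for description, acquired, sold, proceeds, basis in disposals:
    gain = proceeds - basis  # the "gain or (loss)" column on Form 8949
    # Simplification: treat anything held more than 365 days as long-term.
    long_term = (sold - acquired).days > 365
    if long_term:
        long_term_total += gain
    else:
        short_term_total += gain
    print(f"{description}: {'long' if long_term else 'short'}-term {gain:+,.2f}")

# Schedule D rolls the per-lot lines up into short- and long-term totals.
print(f"Short-term total: {short_term_total:+,.2f}")
print(f"Long-term total: {long_term_total:+,.2f}")
```

Merely holding an asset that appreciates, as in the spokesperson's example, produces no line on these forms; only a disposal does.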

Tax report pre-filling is available in the most basic plan, which costs $49 per tax year and is limited to 100 reports. More premium plans are available for users with thousands of transactions.

"We enjoin crypto holders to file their taxes, especially if they suffered heavy losses this crypto winter," the representative said. "In fact, you'd want to report these losses if you want to reduce your liabilities, and Koinly is the perfect tool to see how much deductions you could claim."

To try Koinly for free, interested parties are advised to visit https://koinly.io/?via=49642CB0&utm_source=affiliate

The Crypto Merchant

609 South Ridgewood Avenue

United States


See the original post here:
Altcoin Tax Reporting Software With IRS 8949, Schedule D Pre ... - Digital Journal


5 Cryptos That Underperformed Last Week – BeInCrypto

BeInCrypto looks at the five biggest altcoin losers in the entire crypto market this week, specifically from March 24 to 31.

The cryptocurrency prices that have fallen the most in the entire altcoin market are:

The STX price has fallen since reaching a yearly high of $1.31 on March 20. The same day, it created a bearish candlestick with a long upper wick.

On March 30, the price broke down from the $1 horizontal area and validated it as resistance (red icon). This renders the previous breakout a deviation (red circle).

If the decrease continues, the next closest support would be at $0.60.

However, if the STX price reclaims the $1 area again, it could increase to $1.30.

The MINA price has fallen since March 18. It created a lower high on March 24 and accelerated its rate of decrease afterward. On March 28, the price validated the $0.75 area as resistance (red icon).

If this leads to another drop, the MINA price could fall to $0.60.

However, if the price reclaims the $0.75 area, it could increase toward $0.90.

The AGIX price broke out from a descending resistance line on March 14. However, it failed to sustain its increase and fell to the line once more on March 28 (green icon). It is unclear if this will initiate another bounce.

In case it does, the closest resistance would be at $0.57.

However, if the AGIX price closes below the confluence of support levels at $0.35, it could break down toward $0.20.

The Aptos price has increased since March 10, when it bounced at the $10 horizontal support area (green icon). However, the bounce was weak, and the price has nearly returned to the area once more.

Whether the digital asset breaks down from the $10 area or breaks out from the current descending resistance line will likely determine the future trend.

A breakdown could lead to lows near $7 while a breakout could lead to an increase toward $14.

The DASH price broke down from an ascending support line on March 9, reaching a low of $43.53 two days later.

While the price bounced afterward, it failed to reclaim the ascending support line.

If the rejection continues, the digital currency could fall to the $48.30 support area again. However, an upward movement to $76 could occur if it reclaims the line.

For BeInCrypto's latest crypto market analysis, click here.

In line with the Trust Project guidelines, this price analysis article is for informational purposes only and should not be considered financial or investment advice. BeInCrypto is committed to accurate, unbiased reporting, but market conditions are subject to change without notice. Always conduct your own research and consult with a professional before making any financial decisions.

Excerpt from:
5 Cryptos That Underperformed Last Week - BeInCrypto


Ripple News: Don't Buy XRP Until This Happens, Warns Top Crypto Analyst – Coinpedia Fintech News

Cryptocurrency analyst Austin Arnold of Altcoin Daily has advised investors to avoid buying XRP until there is clarity on the ongoing SEC lawsuit. Despite XRP's potential, the lawsuit remains a major concern for investors. This warning comes as the cryptocurrency market continues to show signs of a strong incoming bullish run.

Austin, a long-time skeptic of XRP, has focused on chart analysis to make his assessment. He acknowledges that XRP has spent about a year ranging between $0.33 and $0.50, a period of supply accumulation. However, he notes that recent weeks have seen a shift, with XRP breaking above 55 cents and closing with a strong candle last week.

Austin says that if XRP can break past last week's highs, there's not much in the way for the remittance coin until it reaches $0.85 or even a dollar. He suggests that the market may be reacting in anticipation of a favorable outcome for Ripple, XRP's issuer, in the lawsuit. However, until there is clarity on the lawsuit, Austin advises investors to hold off on buying XRP.

Also Read: XRP Price Set to Explode: Historic Trends Point to Massive Surge in April – Coinpedia Fintech News

The ongoing Ripple v SEC lawsuit has been a major factor in XRP's price movement. The SEC filed the lawsuit against Ripple Labs in December 2020, accusing the company and its executives of raising $1.3 billion through the sale of XRP in unregistered securities offerings. Since then, XRP's price has been on a rollercoaster ride, with its value rising and falling based on developments in the lawsuit.

However, interestingly, ever since the cryptocurrency market came out of the long-running bear market of 2022, XRP has been holding up quite nicely even as Bitcoin and Ether fell on industry events. The behavior of the Ripple-issued token has impressed several experts and critics alike. At press time, XRP was trading at $0.51.

This Might Interest You: XRP Is A Must-Have In Your Portfolio, Says Popular YouTuber Ben Armstrong – Coinpedia Fintech News

So, are you planning to hold on to your XRP or will you sell?

View original post here:
Ripple News: Don't Buy XRP Until This Happens, Warns Top Crypto Analyst - Coinpedia Fintech News


Haar Cloud Ltd. announces the launch of a new range of managed hosting services – EIN News

Haar Cloud Ltd. launches a new range of managed hosting services

Haar Cloud, a leading cloud solutions and IT infrastructure company, has just released a new range of Managed Hosting services.

Adrian Huma, co-founder of Haar Cloud and Director

Understanding how complex it can be to start a new website or ecommerce store from scratch, Haar Cloud now offers a new range of cloud web hosting services, all fully managed and powered by an easy-to-use and performant cPanel control panel.

"With these new managed hosting services, whether our customers own a startup or have a large company, they can choose a Haar web hosting plan customized to their needs and goals," said Adrian Huma, co-founder of Haar Cloud and Director.

The new Managed Hosting gives customers access to industry-leading systems and highly skilled technicians, along with NVMe SSD storage, 100% uptime, 10Gbit/s servers, and anti-DDoS protection.

"On top of everything, our new managed hosting range provides a website with optimal speed and security, pre-installed tools, such WooCommerce or WordPress, and 24/7 technical support with an average response time of 15 minutes," added Adrian Huma, co-founder of Haar Cloud.

Haar's new General Managed Hosting services include:

- General Hosting cPanel Small Plan provides 1 site/domain, 100GB cloud storage and 250GB bandwidth, among other features, such as DDoS protection.

- General Hosting cPanel Medium Plan provides 10 sites/domains, 125GB cloud storage and 500GB bandwidth, among others, such as 50 Email addresses.

- General Hosting cPanel Professional Plan provides unlimited sites/domains, 150GB cloud storage, unlimited bandwidth, and 24/7 monitoring.

- General Hosting cPanel Scale Plan provides unlimited sites/domains, 300GB cloud storage, unlimited bandwidth, 24/7 monitoring, and Cloudflare enabled.

This new range of managed hosting services will help customers build a high-performance, secure online store at a low cost. The Haar team of hosting specialists will pre-install everything, make sure every website is up to date, and be available 24/7 to answer any technical questions customers may have.

Haar Cloud is committed to delivering all the technology solutions businesses worldwide need, helping its customers achieve the best results using the right cloud technology, cyber security, and IT support services.

Customers can already access the new Managed Hosting range through the Haar Cloud client portal.

If you want to know more about Haar Cloud and Technology solutions, please visit http://www.hellohaar.com.

About Haar

Haar provides tailor-made cloud and IT infrastructure services, all delivered by accredited technologies and certified experts, with 24x7 support included. We're here to help you get the most from your technology with the best Cloud Infrastructure, Managed Hosting, Cyber Security and IT Consultancy solutions on the market. For more information, please visit http://www.hellohaar.com.

Ana Dumbravescu
Haar Cloud Ltd
+44 161 768 3149
email us here
Visit us on social media: Facebook | LinkedIn

We are Haar!

Follow this link:
Haar Cloud Ltd. announces the launch of a new range of managed hosting services - EIN News


Forest Admin launches cloud version of its low-code internal tool builder – TechCrunch

Image Credits: Christoph Wagner / Getty Images

French startup Forest Admin is launching a cloud-based version of its product. The company helps you create flexible back-end admin panels for operations teams. Essentially, Forest Admin helps development teams spend less time on back office tools so they can focus on the actual product.

With the cloud version, companies just have to integrate the service with their own SQL database. After that, they can start using Forest Admin to manage their business.

"The onboarding is very similar to business intelligence tools," Forest Admin co-founder and CEO Sandro Munda told me. But BI tools mostly fetch data so that it can be transformed, analyzed and compiled into quarterly reports and reused in business planning meetings.

Forest Admin is all about interacting with your product's data. Companies can also integrate the admin panel with third-party services like Stripe, Mixpanel and Intercom. Forest Admin users can then trigger actions and create workflows with different levels of permission in the company.

Unlike other internal tool builders, such as Retool, Forest Admin is focused on admin panels exclusively. It isn't designed to be an all-in-one internal tool builder because sophisticated services also tend to be complex.

For instance, a fintech company could use Forest Admin to review and validate documents and make sure it complies with KYC and AML regulations (know your customer and anti-money laundering). Qonto is one of the startup's biggest customers, with 2,000 people using Forest Admin. An e-commerce company could also use Forest Admin to refund customers or reorder an item in case it's been lost.

In addition to centralizing all your data, a tool like Forest Admin also makes it easier to interact with your data. Companies can filter their user base and create segments, update and delete records and more.

Currently, Forest Admin customers install a component on their servers. This agent reads their data and makes it accessible through an API. Forest Admin hosts the front-end interface on its own servers. When customers connect to their admin panels, Forest Admin fetches information from the component installed on their infrastructure.

The new cloud version greatly lowers the barrier to entry, as you don't have to install Forest Admin's component on your servers. With the right firewall rules and tunneling software, your database should remain secure. "There's no data duplication, you make changes on your database directly," Munda said.

"Our goal is really to attract a new segment of customers," he added. Previously, you needed to integrate Forest's agent into your own application. Companies already using high-level cloud services exclusively couldn't use Forest Admin before the release of the cloud-based version.

Many operations-driven companies already use Forest Admin, such as fintech, marketplace, mobility and healthcare companies. "We are close to profitability but it isn't what we're aiming for right now," Munda said. And the cloud product should help when it comes to bringing in more revenue.

Read more:
Forest Admin launches cloud version of its low-code internal tool builder - TechCrunch


The New Frontiers of Cybersecurity: Exponential Increase in Complexity – Security Boulevard

Author: Itzik Kotler, CTO & Co-Founder, SafeBreach

The New Frontiers of Cybersecurity is a three-part thought-leadership series investigating the big-picture problems within the cybersecurity industry. In the first post, we explored the reasons malicious actors have been able to enhance their ability to execute and profit from attacks. In the second post, we discussed how the massive increase in endpoints and systems online has dramatically increased the attack surface. A different, but equally critical, dimension that we'll discuss in this third and final installment is that alongside this increase in attack surface comes a significant increase in complexity that is plaguing security teams.

The simple combinatorial mathematics of the sheer increase in endpoints not only means a greater number of systems to manage but also much more complex network architectures and webs of connections underlying IT and technology infrastructure and systems. The rise of cloud computing added a further layer of complexity for individuals trying to keep their applications and data secure. For example, an organization like Twitter that is composed of thousands of microservices will have a vastly more complex endpoint infrastructure than an enterprise that is guarding a handful of servers or even a few cloud instances.

Rather than linear complexity increases with each new node, we see exponential increases in complexity for every added node. Then there is the element of time. It is hard enough to guard and proactively protect an IT infrastructure that is growing quickly but steadily and constantly. It is entirely another issue to protect an IT infrastructure with a growing number of endpoints or systems attached to IP addresses that only exist for short periods and then morph into something else.

This combinatorial complexity is the new reality of Kubernetes and containers, serverless computing, and IPv6 (the newer IP numbering structure that enables billions more endpoints and systems to have their own unique IP addresses). In Kubernetes and containers, new endpoints with IP addresses may spin up and shut down every hour or even on a minute-by-minute basis. Unlike the billions of connected devices, which are far more limited in compute resources and other restrictions, containers and serverless are general purpose and can be more easily adapted for almost any type of payload or attack.

So we are now in a world where anyone can provision hundreds or even thousands of general-purpose servers or lightweight computers with the push of a button. This means a lot more complexity to protect, but also that attackers can generate significantly more complex attacks. Remember, the nature of cloud computing is that it is open to everyone. This includes the Kubernetes engines offered by cloud providers, as well as more abstracted systems to scale up and manage large fleets of containers like Amazon's Fargate platform.

We already see signs of this new complexity. A scan by security researchers in mid-2022 pulled in over 900,000 exposed Kubernetes management endpoints. To be clear, these endpoints were not necessarily vulnerable or unprotected. But in security, exposing endpoints provides attackers information they can use to create more targeted attacks. Likewise, public compute clouds have unpatched security flaws that can allow rogue users to break out of a container and potentially access the management plane of the public cloud. This can then allow them to attack other tenants in the cloud, violating the core proposition of secure multi-tenancy.

In the legacy world of tightly controlled network perimeters and less secure internal networks, there was little need to harden endpoints not designed to be exposed to the world. In the datacenter era, a firewall on the edge of the center guarded against unauthorized probes and kept everything private. Even a misconfigured internal server was not accessible to the world. A firewall engineer had to explicitly change firewall rules to open that server to access from the Internet. Today, the opposite is true, with open-to-the-Internet being the default state and the burden falling on developers, DevOps teams, and security teams to set up firewalls, API gateways, and other protections to guard against probes and attacks.

Kubernetes can (and often does) expose endpoints as default behavior, providing a handy map to attackers. We are already seeing attackers exploit the complexity of containers and Kubernetes as a new attack vector, driven in part by the elimination or limitation of favorite older vectors such as macros.
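As a hedged illustration of what such exposure looks like in practice, the sketch below probes a Kubernetes API server's /version endpoint, which default RBAC typically leaves readable without credentials. The host and port are placeholders, and a probe like this should only ever be pointed at infrastructure you own:

```python
import requests
import urllib3

urllib3.disable_warnings()  # the probe deliberately skips TLS verification

API_SERVER = "https://k8s.example.internal:6443"  # placeholder address

def probe_version(base_url: str) -> None:
    # /version requires no credentials on many default clusters; a 200 here
    # means anonymous requests reach the API server and build metadata leaks.
    resp = requests.get(f"{base_url}/version", verify=False, timeout=5)
    if resp.ok:
        print("Anonymous probe succeeded:", resp.json())
    else:
        print("Probe rejected with HTTP", resp.status_code)

if __name__ == "__main__":
    probe_version(API_SERVER)
```

Internet-wide scans like the one cited above are doing little more than this, repeated across millions of addresses.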

The big push behind the cloud and Kubernetes is to allow developer teams, DevOps, and IT to be more agile, flexible, and resilient. However, this is a paradigm shift with many implications that may be hard for IT and security teams to address. In the cloud, the default is public. In the legacy datacenter world, the default was private, and IT or security would need to grant access. In the public cloud, IT or security must instead explicitly restrict access. The default premise of cloud, going back to Jeff Bezos's policies at AWS, is to make services, APIs, storage, computing, and networking accessible to anyone with a credit card. In the cloud, therefore, the default for a service is to be exposed to the world. In the traditional datacenter and legacy networking world, a service must be configured to be exposed.

This paradigm shift injects a new layer of complexity into security and can lead to configuration mistakes, even for cloud-native companies. A developer may build a test application and load code onto it that communicates with other services out in the cloud or even opens an API to the public Internet. The developer may not realize that the cloud server the test application runs on shares the same namespace and security groups as other key production assets. That test server might also be left open by mistake for days, becoming a pivot or jump point for a malicious actor. Another point to consider is that in the past, storage was physically attached to networks and segregated from public access. To access data contained in that storage, you had to go through the server that was attached to it. Cloud computing broke that paradigm and allowed the easy storage of data in object stores and other online storage buckets. In the cloud, developers and even security teams often store data in public cloud storage buckets without properly configuring the buckets to secure access to them.

While physical data centers are somewhat obscured and blocked from public access or even scans, cloud service providers operate using well-known blocks of public IP addresses. This is true even down to individual services. For example, the IP blocks used by Amazon's S3 storage service are well documented and publicly shared on the Internet. Because malicious actors know the IP addresses, running continuous probes of those blocks searching for vulnerabilities is far less resource-intensive and expensive. Attackers also know the default configurations of Kubernetes clusters and connecting APIs. They know the default security configurations of most server images deployed as part of the default public compute cloud catalogs, as well as what ports are protected and opened by default in commonly deployed public cloud Web Application Firewalls. The upshot of all this? We face the opposing trends of operating and security infrastructure being made more complicated by the shift to the cloud, while at the same time identifying attack targets is becoming simpler.
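For instance, AWS publishes its address ranges in a machine-readable feed (ip-ranges.json), so enumerating the blocks that serve S3 in a given region takes only a few lines. The snippet below is a minimal sketch of that lookup:

```python
import requests

# AWS publishes its public IP ranges as a JSON document.
IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def s3_prefixes(region=None):
    data = requests.get(IP_RANGES_URL, timeout=10).json()
    return [
        entry["ip_prefix"]
        for entry in data["prefixes"]
        if entry["service"] == "S3" and (region is None or entry["region"] == region)
    ]

if __name__ == "__main__":
    blocks = s3_prefixes("us-east-1")
    print(f"{len(blocks)} S3 CIDR blocks advertised for us-east-1")
    print(blocks[:5])
```

The same feed that helps defenders build allow-lists also tells attackers exactly where to aim their scanners.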

The days of firewalling the data center to guard infrastructure are long gone. Many organizations maintain a global firewall in front of their infrastructure. These firewalls are necessarily porous due to the growing number of APIs and services that must connect to the outside world. In the cloud, the initial approach was to create security groups. Critical processes, services, and instances were placed inside more security groups. Access controls were applied on a per-group basis, associated with identity providers and authentication systems. Security groups are still necessary but insufficient to handle the cloud infrastructure's complexity.

The answer is defense-in-depth. Security teams put in place more defense technologies, protecting data assets and applications in multiple ways. APIs are guarded by API gateways. Kubernetes clusters are guarded by specialized Web Application Firewalls and Ingress Controllers. SecDevOps teams mandate smaller, more lightweight firewalls in front of every public service or API. Application security teams require that SAST and SCA scans be run on any code iterations. Cloud providers add technology to ensure that cloud services, such as storage buckets, are properly secured. Endpoint detection and response is mandatory for all devices interacting with enterprise and cloud assets. Security is also placed in the content delivery network (CDN), extending web firewalls and denial of service (DoS) protection further away from core app servers to intercept attacks further upstream. These layered systems require proper configuration and management, a never-ending task.

Complexity increases the probability of mistakes. This complexity also provides malicious actors with potential opportunities to hide and attack. The high degrees of complexity are precisely what hackers use and abuse to get their way. An enterprise may have multiple directories that maintain user permissions, and an admin forgets to update one of them. There may be five valid authentication methods, and the last method is the weakest. This will be the one malicious actors invariably choose to exploit. While 90% of development use cases and user requirements are satisfied with the standard catalog of infrastructure and devices, the remaining 10% of non-standard use cases will be the last to be updated and will likely present the best opportunities for exploits. Complexity creeps up on CISOs one exception at a time, one additional service, software, or SaaS tool at a time.

So what can security teams do? According to recommendations from leading security agencies like the Cybersecurity and Infrastructure Security Agency (CISA), organizations must begin to invest in automated, continuous security validation to keep up. Unlike an annual penetration test by a third party, organizations must continually evaluate their security control stack. This means performing adversary simulations to test that the defensive controls are working correctly to detect, log, and stop attacks. This continuous test also helps organizations identify those temporary resources that may have been brought up and not protected correctly. Security teams should also make sure they do not limit themselves to external attack surface validation only. Any network can become an entry, exit, or pivot point for malicious actors to use.

Connect with a SafeBreach cybersecurity expert or request a demo of our advanced platform today to see what continuous security validation, powered by breach and attack simulation (BAS), can do for you.

Go here to see the original:
The New Frontiers of Cybersecurity: Exponential Increase in Complexity - Security Boulevard


IBM Furthers Flexibility, Sustainability and Security within the Data … – IBM Newsroom

ARMONK, N.Y., April 4, 2023 /PRNewswire/ -- IBM (NYSE: IBM) today unveiled new single frame and rack mount configurations of IBM z16 and IBM LinuxONE 4, expanding their capabilities to a broader range of data center environments. Based on IBM's Telum processor, the new options are designed with sustainability in mind for highly efficient data centers, helping clients adapt to a digitized economy and ongoing global uncertainty.

Introduced in April 2022, the IBM z16 multi frame has helped transform industries with real-time AI inferencing at scale and quantum-safe cryptography. IBM LinuxONE Emperor 4, launched in September 2022, features capabilities that can reduce both energy consumption and data center floor space while delivering the scale, performance and security that clients need. The new single frame and rack mount configurations expand client infrastructure choices and help bring these benefits to data center environments where space, sustainability and standardization are paramount.

Rack Mount in Use

"IBM remains at the forefront of innovation to help clients weather storms generated by an ever-changing market," said Ross Mauri, General Manager, IBM zSystems and LinuxONE. "We're protecting clients' investments in existing infrastructure while helping them to innovate with AI and quantum-safe technologies. These new options let companies of all sizes seamlessly co-locate IBM z16 and LinuxONE Rockhopper 4 with distributed infrastructure, bringing exciting capabilities to those environments."

Designed for today's changing IT environment to enable new use cases

Organizations in every industry are balancing an increasing number of challenges to deliver integrated digital services. According to a recent IBM Transformation Index report, among those surveyed, security, managing complex environments and regulatory compliance were cited as challenges to integrating workloads in a hybrid cloud. These challenges can be compounded by more stringent environmental regulations and continuously rising costs.

"We have seen immense value from utilizing the IBM z16 platform in a hybrid cloud environment," said Bo Gebbie, president, Evolving Solutions. "Leveraging these very secure systems for high volume transactional workloads, combined with cloud-native technologies, has enabled greater levels of agility and cost optimization for both our clients' businesses and our own."

The new IBM z16 and LinuxONE 4 offerings are built for the modern data center to help optimize flexibility and sustainability, with capabilities for partition-level power monitoring and additional environmental metrics. For example, consolidating Linux workloads on an IBM LinuxONE Rockhopper 4 instead of running them on comparable x86 servers under similar conditions and in a similar location can reduce energy consumption by 75 percent and space by 67 percent.1 These new configurations are engineered to deliver the same hallmark IBM security and transaction processing at scale.

Designed and tested to the same internal qualifications as the IBM z16 high availability portfolio2, the new rack-optimized footprint is designed for use with client-owned, standard 19-inch racks and power distribution units. This new footprint opens opportunities to include systems in distributed environments with other servers, storage, SAN and switches in one rack, designed to optimize both co-location and latency for complex computing, such as training AI models.

Installing these configurations in the data center can help create a new class of use cases, including:

Securing data on the industry's most available systems3

For critical industries, like healthcare, financial services, government and insurance, a secure, available IT environment is key to delivering high quality service to customers. IBM z16 and LinuxONE 4 are engineered to provide the highest levels of reliability in the industry, with 99.99999% availability to support mission-critical workloads as part of a hybrid cloud strategy. These high availability levels help companies maintain consumer access to bank accounts, medical records and personal data. Emerging threats require protection, and the new configurations offer security capabilities that include confidential computing, centralized key management and quantum-safe cryptography to help thwart bad actors planning to "harvest now, decrypt later."

"IBM z16 and LinuxONE systems are known for security, resiliency and transaction processing at scale," said Matt Eastwood, SVP, WW Research, IDC. "Clients can now access the same security and resiliency standards in new environments with the single frame and rack mount configurations, giving them flexibility in the data center. Importantly, this also opens up more business opportunity for partners who will be able to reach an expanded audience by integrating IBM zSystems and LinuxONE capabilities to their existing footprints."

With the IBM Ecosystem of zSystems ISV partners, IBM is working to address compliance and cybersecurity. For clients that run data serving, core banking and digital assets workloads, an optimal compliance and security posture is key to protecting sensitive personal data and existing technology investments.

"High processing speed and artificial intelligence are key to moving organizations forward," said Adi Hazan, director ofAnalycat. "IBM zSystems and LinuxONE added the security and power that we needed to address new clients, use cases and business benefits. The native speed of our AI on this platform was amazing and we are excited to introduce the IBM LinuxONE offerings to our clients with large workloads to consolidate and achieve corporate sustainability goals."

IBM Business Partners can learn more about the skills required to install, deploy, service and resell single frame and rack mount configurations in this blog.

Complementary Technology Lifecycle Support Services

With the new IBM LinuxONE Rockhopper 4 servers, IBM will offer IBM LinuxONE Expert Care. IBM Expert Care integrates and prepackages hardware and software support services into a tiered support model, helping organizations to choose the right fit of services. This support for LinuxONE Rockhopper 4 will offer enhanced value to clients with predictable maintenance costs and reduced deployment and operating risk.

The new IBM z16 and LinuxONE 4 single frame and rack mount options, supported by LinuxONE Expert Care, will be generally available globally[4] from IBM and certified business partners beginning on May 17, 2023. To learn more:

About IBM

IBM is a leading global hybrid cloud and AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 3,800 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to affect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit www.ibm.com

Media Contact: Ashley Peterson, ashley.peterson@ibm.com

1 DISCLAIMER: Compared IBM Machine Type 3932 Max 68 model consisting of a CPC drawer and an I/O drawer to support network and external storage with 68 IFLs and 7 TB of memory in 1 frame versus compared 36 x86 servers (2 Skylake Xeon Gold Chips, 40 Cores) with a total of 1440 cores. IBM Machine Type 3932 Max 68 model power consumption was measured on systems and confirmed using the IBM Power estimator for the IBM Machine Type 3932 Max 68 model configuration. x86 power values were based on Feb. 2023 IDC QPI power values and reduced to 55% based on measurements of x86 servers by IBM and observed values in the field. The x86 server compared to uses approximately .6083 KWhr, 55% of IDC QPI system watts value. Savings assumes the Worldwide Data Center Power Utilization Effectiveness (PUE) factor of 1.55 to calculate the additional power needed for cooling. PUE is based on Uptime Institute 2022 Global Data Center Survey (https://uptimeinstitute.com/resources/research-and-reports/uptime-institute-global-data-center-survey-results-2022). x86 system space calculations require 3 racks. Results may vary based on client-specific usage and location.

2 DISCLAIMER: All the IBM z16 Rack Mount components are tested via the same process requirements as the IBM z16 traditional Single Frame components. Comprehensive testing includes a wide range of voltage, frequency and temperature testing.

3 Source: Information Technology Intelligence Consulting Corp. (ITIC). 2022. Global Server Hardware, Server OS Reliability Survey. https://www.ibm.com/downloads/cas/BGARGJRZ

4 Check local availability for rack mount here.

SOURCE IBM

Excerpt from:
IBM Furthers Flexibility, Sustainability and Security within the Data ... - IBM Newsroom


Ofcom concerned about Microsoft and Amazon domination of cloud market – Yahoo Finance UK

Ofcom could call in the competition regulator after finding concerns in the cloud services market, a backbone of the online world which is dominated by two companies.

The telecoms regulator proposed on Wednesday that the Competition and Markets Authority open its own probe into the sector amid concerns customers find barriers in their way when trying to switch suppliers.

The cloud space is dominated by two players, Amazon and Microsoft, which together hold an approximate 60%-70% market share.

Ofcom said it was particularly concerned about the two companies' practices because of their dominant position.

Millions of people and businesses have come to rely on cloud computing in recent years.

The cloud loosely refers to a series of massive servers around the world which users can tap into to store photographs or emails, or run software from.

Ofcom said there was still competition in the sector, with innovative products and discounts offered to new customers.

However, it was concerned for customers trying to move from one cloud supplier to another.

The massive suppliers charge significantly higher fees than smaller providers to move data out of the cloud and to another company's servers, Ofcom said.

Users might also struggle to use more than one company's services at the same time because the leading firms prevent some of their services from working effectively alongside those from other suppliers.

"There is a risk that the features we have identified could lead the market to concentrate further towards the market leaders," Ofcom said.

It said the Competition and Markets Authority would be best-placed to investigate this further.

Fergal Farragher, the Ofcom director who led its study into the sector, said: "We've done a deep dive into the digital backbone of our economy and uncovered some concerning practices, including by some of the biggest tech firms in the world.


"High barriers to switching are already harming competition in what is a fast-growing market.

"We think more in-depth scrutiny is needed, to make sure it's working well for people and businesses who rely on these services."

Ofcom said it would take feedback on its findings until mid-May and would make its final decision in October.

Microsoft said: "We look forward to continuing our engagement with Ofcom on their cloud services market study.

"We remain committed to ensuring the UK cloud industry stays highly competitive, and to supporting the transformative potential of cloud technologies to help accelerate growth across the UK economy."

Amazon Web Services said: "These are interim findings and AWS will continue to work with Ofcom ahead of the publication of its final report.

"The UK has a thriving and diverse IT industry with customers able to choose between a wide variety of IT providers.

"At AWS, we design our cloud services to give customers the freedom to build the solution that is right for them, with the technology of their choice.

"This has driven increased competition across a range of sectors in the UK economy by broadening access to innovative, highly secure, and scalable IT services."

View post:
Ofcom concerned about Microsoft and Amazon domination of cloud market - Yahoo Finance UK


Cloud WAF Pricing: All You Need to Know – Security Boulevard

Choosing the right Cloud WAF pricing model is like finding the perfect pair of shoes: it's all about comfort, fit, and style for your organization's needs.

In this guide, we'll help you navigate the world of Cloud WAF pricing, exploring different options and factors so that you can find the perfect fit for your web application security requirements.

For those still evaluating Cloud vs. on-prem WAF, here's a detailed article on why cloud WAFs are better than on-premise WAFs.

WAFs provided by public clouds such as AWS and Azure typically price on a pay-as-you-go model.

On the other hand, specialized WAF providers such as Indusface, Akamai, and Cloudflare offer a subscription model.

There are many pay-as-you-go features offered even by subscription providers. The value addition that specialized WAFs provide is the availability of core rules that offer default protection against OWASP Top 10 vulnerabilities.

In public Cloud WAFs, you'll typically need to either:

That said, several pay-as-you-go features are provided even by specialized WAF providers.

In the next section, we will cover all the factors that affect WAF pricing.

This is the first parameter that affects pricing. Even within this, there are two models:

a. Domain: One license for the domain, and this includes subdomains too. This model is typically used when similar applications are on different sub-domains, for example, qa.acme.com vs. acme.com.

While you can use this model for sub-domains that host different applications, the possibility of false positives is higher, as the same rule set is applied to multiple applications.

b. Application: Since every application differs, this model helps get fine-grained protection and custom rules. Usually, the license depends on a per-website model or a Fully Qualified Domain Name (FQDN).

For example, you'll typically be charged one license for http://www.acme.com and one more for abc.acme.com.

Cloud WAFs act as filters before traffic hits your origin server. All the traffic passed over to your origin servers is billed as the bandwidth cost.

Here also, there are three models:

a. Requests: The pricing plan might have a set cost for a specific number of requests each month, plus extra charges for any extra requests over the set limit. Another option is that the pricing depends only on the total number of requests, so customers pay for what they use.

b. Peak Mbps: Some WAF companies use a peak Mbps (megabits per second) pricing plan. They charge customers based on the highest bandwidth (mainly at the 95th percentile) used in a set time, like a month. This model looks at the most traffic the WAF handles, not the total requests or data moved. It's important for organizations with changing traffic or different bandwidth needs (a rough sketch of the 95th-percentile calculation appears after this list).

c. Bandwidth: Some WAFs use a pricing plan based on the bandwidth over the wire. This includes both the request and response data. They charge customers for data moving through the system. This pricing model is easy to understand and works well for many organizations.
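To make the peak-Mbps model concrete: 95th-percentile billing usually samples throughput at fixed intervals over the month, sorts the samples, discards the top 5%, and bills at the highest remaining value. The figures and per-Mbps rate below are made up; the sketch only shows the arithmetic:

```python
import math
import random

# Pretend these are per-5-minute throughput samples (in Mbps) for one month.
random.seed(1)
samples = [random.uniform(50, 400) for _ in range(30 * 24 * 12)]

def billable_95th_percentile(mbps_samples):
    ordered = sorted(mbps_samples)
    # Drop the top 5% of samples; the highest remaining sample sets the bill.
    cutoff_index = math.ceil(len(ordered) * 0.95) - 1
    return ordered[cutoff_index]

rate_per_mbps = 0.75  # hypothetical committed rate, in dollars per Mbps
billable = billable_95th_percentile(samples)
print(f"95th-percentile throughput: {billable:.1f} Mbps")
print(f"Monthly charge at ${rate_per_mbps}/Mbps: ${billable * rate_per_mbps:,.2f}")
```

The practical effect is that short traffic spikes are forgiven, while sustained high throughput drives the bill.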

As discussed earlier, depending on the WAF provider, you may get charged for the following features:

a. DDoS & Bot Mitigation: This is probably the single most expensive feature addition. Depending on the application, a subscription to this feature alone typically costs a couple of thousand dollars per month. In addition, some vendors even bill you for the bandwidth in case of a DDoS attack. In the case of Indusface AppTrana, DDoS is bundled as part of the monthly subscription plans.

b. API Security: Most popular WAFs now include an API security solution. This category is now called WAAP. However, this is generally priced as an add-on, as API security needs special configuration, especially to create a positive security model. The AppTrana WAAP, by default, protects all APIs that are part of the same FQDN. See more details here.

c. Analytics: Getting analytics on the kind of attacks blocked is also a big add-on, especially if you get just one WAF license and use it to protect multiple applications such as payroll.acme.com and crm.acme.com along with acme.com. As these are all different applications, storing attack logs and analytics on these logs would be extremely expensive.

Hence, most WAF providers don't provide access on a single license. At Indusface, we often suggest taking additional licenses for critical applications requiring attack logs and analysis.

d. DAST scanners: In most organizations, DAST and WAF are separate, non-integrated products. This is a lost opportunity, as vulnerabilities found by a DAST scan could quickly be patched on the WAF. This process is called virtual patching, and it buys developers time before they patch these vulnerabilities in code.

At Indusface, we bundle the DAST scanner Indusface WAS as part of the AppTrana WAAP. You save costs on subscriptions and integrate DAST and virtual patching into CI/CD pipelines so that security is handled even in an agile development cycle (a hedged sketch of such a pipeline gate appears after this list).

e. CDN: Since WAAP providers have some pricing component dependent on data transfer, enabling a CDN will lead to significant cost savings. In most WAFs, this is an add-on.

f. Support: 24x7 phone, email, and chat support is yet another feature that most WAF vendors add only in enterprise contracts. At Indusface, you will get enterprise support at SMB pricing; see the WAAP pricing page here.
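As a hedged sketch of what wiring DAST findings and virtual patching into a CI/CD pipeline could look like (the report format, field names, and file path below are entirely hypothetical, not any vendor's actual export), a pipeline step might fail the build only for critical findings that no virtual patch covers yet:

```python
import json
import sys

# Hypothetical DAST export: a list of findings, each with a severity and a
# flag indicating whether a WAF virtual patch already covers the issue.
def unpatched_critical_findings(report_path):
    with open(report_path) as fh:
        findings = json.load(fh)
    return [
        finding for finding in findings
        if finding.get("severity") == "critical"
        and not finding.get("virtual_patch_applied")
    ]

if __name__ == "__main__":
    blockers = unpatched_critical_findings("dast-report.json")
    for finding in blockers:
        print(f"BLOCKER: {finding.get('id')} - {finding.get('title')}")
    # A non-zero exit fails the pipeline stage until a patch or code fix lands.
    sys.exit(1 if blockers else 0)
```

The point of a gate like this is to let releases proceed once a virtual patch is in place, rather than blocking every deploy until the code fix ships.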

Managed services play a big part in application security, especially as threats evolve. For example, 200+ application-level critical/high zero-day vulnerabilities are discovered monthly. Compute power is so cheap that a one-hour DDoS attack can be bought for $5, and this will get cheaper.

To combat all of this, any WAAP/WAF solution needs to evolve. While most Cloud WAFs keep the software updated, a key part of defense is the rule set, and unless the security teams have highly skilled security engineers, they wouldn't be able to touch any of the rule sets.

The other problem is that even if rules are sent as patches, the onus is on the application team to monitor for false positives and ensure 99.99% availability while preventing downtime. Often, application teams do not apply these patches; worse, most WAFs are perpetually in log mode, as in they don't block any attacks!

Then there's the problem of DDoS, which is a big ransomware threat, and sophisticated actions such as rate limits, tarpitting, CAPTCHA, and blocks need careful monitoring as there is a high possibility of false positives.

So managed services are essentially an extended SOC/IT team to help with the following:

While every vendor can promise managed services, evaluating the SLAs with which they operate is critical. We highly recommend checking the support response times and SLAs, uptime guarantee, and latency with the vendor.

At Indusface, we are proud to ensure a 24-hour SLA on virtual patches for critical vulnerabilities. You can find more details on the SLA here.

Here's a step-by-step framework to help people choose a WAF based on pricing:

1. Identify your organization's requirements:

2. Research WAF providers

3. Analyse pricing models:

4. Evaluate included features and additional services

5. Assess data center locations and regions

6. Compare technical support and SLAs

7. Calculate the total cost of ownership (TCO)

8. Rank various WAF providers

9. Run product trials

By following this framework, you can systematically evaluate and compare different WAFs based on pricing, features, support, and other factors, ultimately selecting the most suitable and cost-effective solution for your organization.

In conclusion, selecting the right Cloud WAF is crucial for safeguarding your web applications and maintaining a strong security posture. A thorough understanding of Cloud WAF pricing, features, and service level agreements will enable your organization to make informed decisions, ensuring you invest in a solution that fits your budget and provides robust protection against ever-evolving cyber threats.

Stay tuned for more relevant and interesting security updates. Follow Indusface on Facebook, Twitter, and LinkedIn.

The post Cloud WAF Pricing: All You Need to Know appeared first on Indusface.

*** This is a Security Bloggers Network syndicated blog from Indusface authored by Indusface. Read the original post at: https://www.indusface.com/blog/cloud-waf-pricing-all-you-need-to-know/

See the rest here:
Cloud WAF Pricing: All You Need to Know - Security Boulevard


iExec RLC: Unlocking New Possibilities in the Cloud Computing … – The Southern Maryland Chronicle

In a world where cloud computing is becoming increasingly popular, iExec RLC offers a unique solution for businesses looking to make the most out of their resources. By providing access to distributed applications and services, iExec RLC unlocks new possibilities in the cloud computing space.

The platform allows users to securely access and deploy any application or service from anywhere in the world without having to worry about data security or reliability. It also eliminates the need for complex infrastructure setup and maintenance, as all applications and services are hosted on an Ethereum-based blockchain network. As such, businesses can benefit from reduced costs associated with the iExec RLC Price and hosting fees while also having the advantage of increased flexibility when scaling their solutions according to their needs.


Furthermore, iExec RLC gives users complete control over their data privacy settings, allowing them to decide who can access what information they store on the platform. All of these features make iExec RLC an attractive option for businesses looking for a reliable and secure way to unlock new possibilities in the cloud computing space.

iExec RLC (RLC stands for Run on Lots of Computers) is a decentralized cloud computing platform that enables users to rent out their computing resources in exchange for cryptocurrency. It was created by the French startup iExec, which has been developing blockchain-based solutions since 2016. The platform allows users to access distributed applications and services without owning or managing any hardware. Instead, they can rent the necessary computing power from other users on the network. This makes it easier and more cost-effective for developers to create and deploy distributed applications and for businesses to access powerful computing resources without investing in expensive hardware, allowing them to tap into new digital markets like the Metaverse, accessing a new market of digital consumers. Additionally, iExec provides a marketplace where developers can list their applications and services, allowing them to monetize their work while giving users easy access to high-quality products.

To buy and sell iExec RLC tokens, you will need to use a cryptocurrency exchange. First, you will need to create an account on the exchange platform of your choice. Once your account is created, you can deposit funds using various payment methods such as bank transfer or credit card. After your funds have been deposited, you can then search for the iExec RLC token and place an order to buy or sell it at the current market price. Once your order has been filled, you will be able to withdraw your tokens from the exchange into a secure wallet that supports them.

iExec RLC uses distributed ledger technology (DLT) to ensure the integrity of its network by providing an immutable record of all transactions on the platform. This makes it an ideal solution for companies looking for a secure way to store sensitive information such as customer data or financial records. iExec also offers a range of advanced analytic capabilities which allow businesses to gain valuable insights into their operations and make better decisions based on real-time data analysis. All user data is encrypted using industry-standard encryption algorithms, and all communication between servers and the customer's device is done over a secure HTTPS connection. Two-factor authentication has also been implemented for added security, so you can be sure that only you have access to your account. Additionally, the company regularly monitors its systems for any suspicious activity or potential threats. By combining these various security measures, iExec RLC ensures that its users' data remains safe and secure at all times.




View post:
iExec RLC: Unlocking New Possibilities in the Cloud Computing ... - The Southern Maryland Chronicle
