
Lessons From the Tech World that Apply to Accounting Firms – CPAPracticeAdvisor.com

By Davis Bell.

I love talking to people who work in industries different from my own, tech, because they see the world differently, which lets me learn new things from them that I can apply to my work.

One example of this from the accounting profession: Matthew May, one of the co-founders of Acuity, a national accounting firm, recently said during a panel discussion at an accounting conference that his firm had learned a lot about lead generation and prospect data mining from a software company located nearby. May's firm borrowed that software company's tactics to successfully grow its database of leads.

I thought it might be useful for folks in the accounting industry to hear a few lessons from the tech world that can be applied to accounting:

How to attract and retain top talent in a labor-constrained market:

The accounting industry is facing a significant shortfall of qualified workers. This enables employees to be choosy about where they work, which in turn means they're after more than just competitive pay and benefits. For managers who began their careers a decade or more ago, this requires changing their paradigm of what employees want. It is a challenge we in the tech industry have faced for quite some time, especially for technical roles like engineering and product management. Some tech companies thought the answer was simply perks: Massages! Ping pong! Bring your dog to work! But in my experience, what workers actually want are career development, opportunities to learn and upskill, talented co-workers, hard problems to solve, and a culture of openness and collaboration. Time spent talking to employees and recruits to figure out what matters to them, and how you can deliver those things, is time well spent.

Quantitative data is key to success.

Clear, real-time analytics are essential to the health of a tech company, an accounting firm, and really any business. Without the right numbers, it's impossible to see exactly what is happening from an operations and workflow standpoint, and your firm is no exception.

Two types of metrics worth borrowing from the technology space are a common set of OKRs (objectives and key results) and key performance indicators (KPIs), both of which should be used to drive the strategic vision of the business or accounting firm.

One of the reasons quantitative data is so vital is that it removes ambiguity about what actions are necessary. For example, you may think there's a problem based on qualitative data (client issues, staff complaints, and workflow misfires), but you need to back up those assumptions with objective metrics. In addition, having numerical objectives and key results will make you much more focused. The same goes for KPIs.

If you need some guidance on which KPIs to track, consider the following: client profitability, client retention, new client growth rate, job profitability, employee productivity, and engagement or service profitability.
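To make a couple of these concrete, here is a minimal sketch in Python, using invented client names and figures, of how a firm might compute client profitability, client retention and new client growth from simple engagement records:

```python
# Hypothetical illustration: computing a few firm KPIs from simple records.
# Client names, fees and costs are invented for the example.

clients_2022 = {"Acme Co", "Birch LLP", "Cedar Inc", "Dune Ltd"}
clients_2023 = {"Acme Co", "Cedar Inc", "Dune Ltd", "Elm Group"}

engagements = [
    # (client, fees billed, direct labor + overhead allocated to the client)
    ("Acme Co", 48_000, 31_000),
    ("Cedar Inc", 22_000, 19_500),
    ("Dune Ltd", 65_000, 40_000),
    ("Elm Group", 15_000, 9_000),
]

# Client profitability: margin earned on each client relationship.
for client, fees, cost in engagements:
    margin = fees - cost
    print(f"{client}: margin ${margin:,} ({margin / fees:.0%} of fees)")

# Client retention rate: share of last year's clients still served this year.
retained = clients_2022 & clients_2023
print(f"Client retention rate: {len(retained) / len(clients_2022):.0%}")

# New client growth rate: clients gained this year relative to last year's base.
new_clients = clients_2023 - clients_2022
print(f"New client growth rate: {len(new_clients) / len(clients_2022):.0%}")
```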

Focus on the customers you want.

When developing apps and technology solutions, the most successful companies focus on an ideal client profile (ICP). It's important to drill down on the very specific characteristics of your ICP and cater to them, rather than trying to be all things to all people and getting distracted by prospects who aren't your ICP.

By understanding and aligning your products and services with your ICP, you allow your team to create truly exemplary client experiences and uncover new ways to make your firm better. Regularly asking for feedback, and being open to the good, the bad, and the ugly, is another practice of the best tech companies that you can use.

Simplify and streamline your tech stack.

From my perspective as the leader of a technology company, this may seem counterintuitive, but when it comes to tech I do believe less is more. I liken appropriate technology usage in a business to a road with a cliff on both sides.

On one side of the road, the cliff is formed by highly manual processes that fail to leverage the powerful technology now available to accounting firms and other businesses. On the other side, the cliff is formed when a company has too many solutions. Firms on this side of the road spend a ton of time and money on technology for its own sake without getting the full return on investment. Go too far in either direction and it becomes a disaster.

The important takeaway is to practice proactive technology hygiene: evaluate what is working and what is not, then make changes as you see fit and match your tech stack to the objectives you are trying to meet.


Following these four lessons from tech companies is a positive step toward building an optimum practice playbook for your firm. Just like tech companies, accounting firms struggle with staffing, streamlining tech, establishing OKRs and tracking formal KPIs, all while trying to deliver a seamless and secure client experience. It's a tall order, but the more we can learn from each other and leverage different business strategies, the stronger we'll all be in the future.

====

Davis Bell is the CEO of Canopy, a leading cloud-based practice management platform for tax professionals. He has held leadership roles in strategy, operations, and finance at a succession of SaaS companies. He's passionate about building innovative software that delights users.

Read more:

Lessons From the Tech World that Apply to Accounting Firms - CPAPracticeAdvisor.com

Read More..

Cloud Computing: DOD Needs to Improve Tracking of Data User Fees – Government Accountability Office

What GAO Found

Data user fees (ingress and egress) are related to how users transfer and access data in a cloud environment. Data ingress is transferring data into the cloud and data egress is transferring data from the cloud. While data ingress is often free to users, cloud service providers generally charge data egress fees for transferring data out of storage (see figure).

Figure: Transfer of Data into and out of the Cloud
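To illustrate how egress fees scale, the sketch below (Python, with invented tiered per-GB rates rather than any provider's actual price list) estimates the cost of transferring a given volume of data out of a cloud environment:

```python
# Hypothetical illustration of tiered data egress pricing.
# The tiers and rates below are invented for the example; real cloud
# providers publish their own rate cards, which change over time.

EGRESS_TIERS = [
    # (tier size in GB, price per GB in USD)
    (10_240, 0.09),        # first ~10 TB
    (40_960, 0.085),       # next ~40 TB
    (float("inf"), 0.07),  # everything beyond that
]

def estimate_egress_cost(gb_out: float) -> float:
    """Estimate the egress bill for gb_out gigabytes transferred out."""
    cost, remaining = 0.0, gb_out
    for tier_size, rate in EGRESS_TIERS:
        billed = min(remaining, tier_size)
        cost += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return cost

# Ingress is modeled as free, matching the common pattern described above.
for gb in (500, 10_000, 100_000):
    print(f"{gb:>8,} GB out -> ${estimate_egress_cost(gb):,.2f}")
```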

The Department of Defense (DOD) has begun to consider data egress fees when procuring and implementing cloud services. The department's recent contract negotiations with commercial providers resulted in discounts on data fees, including data egress fees. Vendor lock-in can happen in cloud computing when the cost of moving to a new provider is so high that a user stays with their incumbent provider. However, DOD officials stated that egress fees had not been a primary cause of vendor lock-in. These officials added that other factors could cause vendor lock-in, including a lack of specific skills among government staff or reliance on cloud services unique to a specific cloud provider.

DOD has mechanisms that could mitigate the impact data egress fees could have on DOD as it procures and implements cloud services across the department. DOD officials reported that data egress fees account for less than 1 percent of known cloud expenditures. However, the department does not have the capability to track and report on these fees. In addition, DOD's contract-specific tools do not track cloud expenditures, including data egress fees, department-wide. DOD officials identified improved insight into cloud expenditures through recent department-wide contracts, such as the Joint Warfighting Cloud Capability (a cloud contract with four commercial service providers), and other tools. However, DOD does not yet have a plan or time frame for adopting a tool that tracks data egress fees. Until DOD acquires and implements such a tool, it will continue to lack full insight into the impact of egress fees.

Cloud computing enables agencies to have on-demand access to shared computing resources. The costs of doing so are often lower than if the agencies were maintaining the resources. In fiscal year 2022, major federal agencies obligated about $7 billion for cloud computing contracts, including approximately $3 billion by DOD. Cloud service providers charge users fees for transferring data out of the cloud, known as data egress fees. Committee reports from the Senate and House Armed Services Committees accompanying the James M. Inhofe National Defense Authorization Act for Fiscal Year 2023 include provisions for GAO to review cloud data egress fees at DOD, including their effects on vendor lock-in.

This report determines the extent to which DOD (1) considered data egress fees when procuring and implementing cloud services and their potential for vendor lock-in and (2) mitigated the impact of data egress fees and tracked and reported on them. To assess DOD's cloud data egress fees, GAO analyzed relevant department guidance on cloud services and the tracking and reporting of cloud expenditures. It also reviewed supporting department documentation on cost reporting and tracking. In addition, GAO interviewed DOD officials.

Originally posted here:
Cloud Computing: DOD Needs to Improve Tracking of Data User Fees - Government Accountability Office

Read More..

Librem 11 tablet sets new standard for privacy and security with … – Help Net Security

Purism introduced the new Librem 11 tablet running secure PureBoot and Linux kernel-based PureOS. Librem 11 is made for individuals, organizations, government agencies, law enforcement agencies, and businesses that need security and privacy with powerful portability.

The Librem 11 tablet runs PureBoot and PureOS, offering security and privacy features that include:

Secure boot process. The Librem 11 powers on with PureBoot for the most secure boot process on the market.

Linux kernel-based OS. The Librem 11 runs PureOS, an open-source, peer-reviewable, and fully verifiable operating system.

No surveillance or data mining. Purism avoids surveillance and data mining business practices, ensuring all your personal identifying information is kept under your control.

True OS convergence. PureOS on the Librem 11 demonstrates real convergence, where the tablet becomes more than just a tablet. It becomes a full-featured computer that can act like a desktop when connected to a monitor, keyboard, and mouse, or like a laptop when connected to the included keyboard sleeve.

Secure supply chain. Purism offers Anti-Interdiction services for the Librem 11, a service for those facing extreme security threats, or those who simply want peace of mind that their device has not been tampered with before, during, or after manufacturing and shipping, making it the most secure tablet on the market today.

Malware protection. The Librem 11 works with the Librem Key, which, in combination with PureBoot, cryptographically identifies tampering.

Librem 11 offers powerful features that include:

Originally posted here:

Librem 11 tablet sets new standard for privacy and security with ... - Help Net Security

Read More..

Do backups belong in the cloud or on-premise? – IT-Online

Give a nod to marketers. When it comes to the cloud, they have helped shape how we think of this technology, almost as a magic solution to any server or data hosting scenario. Of course, the cloud often delivers on such claims. In many cases, it does provide a better alternative to traditional systems.

But cloud marketing did too good a job. "The cloud doesn't solve every IT problem," says Bryce Tatham, GM: business development at Sithabile Technology Services.

"The cloud era has been great for many reasons, but it's also enlightened us about the complex and bespoke side of technology. By that, I mean the principle that you have to look at business requirements first. If you do that wrong, technology becomes very expensive and underwhelming. For a while, the cloud looked like a way to sidestep those issues. But instead, it reminds us that nuance is very important, because otherwise your problems actually become bigger."

Backups are a prime example. On paper, you should throw all your backups into the cloud. It's cheaper, more accessible, and you don't need to own or run the underlying infrastructure.

But reality disagrees: cloud data costs can skyrocket, especially when moving data away from a public cloud; accessibility is dampened by latency; the higher reliance on external networks creates cybercrime risks; and, sometimes, you want to have a hand in the systems that run your data.

Of course, these risks can be managed. Cloud backups are great. But on-premises backups also have their advantages. It depends on what the business needs.

Cloud versus on-premise

Before comparing the two options, it's important to qualify them. Cloud backups typically mean using a public cloud provider, while on-premise backups can sit at the business itself or in a third-party data centre, often using private cloud technology controlled by the business.

"The substantial difference between cloud and on-premise is not about new versus traditional," says Tatham. "They tend to use the same modern backup technologies. The difference is about the backup strategy, cost, access, skills, and legislation."

These five topics provide the best way to grasp backup options.

* Backup strategy: Not all data is the same. Some can languish in archives while other data needs to be always available. This difference informs the backup strategy. In some circumstances on-premise recoveries are faster, but in others the cloud is faster. For example, a remote site with its own servers is likely better off than one relying on the cloud. However, data used by employee SaaS applications works best with cloud backups.

* Cost: How much to pay for backups again depends on the type of data and its access requirements. Cloud storage can seem cheaper, but its costs can become complex and hard to control. Nor is it best practice to store cold archives of data in the public cloud; it's often cheaper to use local tape storage (see the cost sketch after this list). Yet on-premise storage carries additional skills and infrastructure costs, though those can be balanced against the value of access.

* Access: Data is not much use if there isn't timely access. Again, there is no clear winner on which option is faster. Moving large volumes of data to or from a public cloud data centre abroad can cause delays, yet using local public cloud servers is faster. That being said, one must ensure the correct data is on the right servers. On-premise and private cloud systems don't have this issue, though that flexibility often comes with higher infrastructure costs.

* Skills: Since public cloud systems are run by third parties, most companies that use them for backups work with partners and their skilled employees, a significant cost saving, though it also means heavy reliance on third parties. On-premise systems require in-house skills, which can be expensive. Again, it depends on the business needs. Smaller companies tend to prefer the cloud, while larger companies blend the two options.

* Legislation: Regulations govern some data types to protect personal information, safeguard tax records, or cover various other legal requirements based on a company's size, sector and geographic operations. On-premise data backups provide a level of control for managing legal risks. Public cloud services can cater for legislation, but it's a complicated exercise and requires close reading of the service contracts covering liabilities.
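To illustrate the cost trade-off flagged above, here is a minimal sketch in Python comparing the annual cost of keeping a cold archive in a public cloud archive tier versus on local tape; every rate in it is an assumption for illustration, not a quote from any provider, and the conclusion flips entirely depending on the figures you plug in:

```python
# Hypothetical cost comparison for a cold backup archive.
# All rates are invented for illustration; plug in real quotes before deciding.

ARCHIVE_TB = 2_000            # size of the cold archive
RESTORE_TB_PER_YEAR = 50      # how much is actually restored in a typical year

# Public cloud archive tier (assumed rates)
CLOUD_STORAGE_PER_TB_MONTH = 4.0   # $/TB/month for an archive storage class
CLOUD_EGRESS_PER_TB = 90.0         # $/TB to retrieve and transfer data out

# On-premise tape (assumed rates, amortised per year)
TAPE_MEDIA_PER_TB = 8.0            # $/TB of tape media
TAPE_INFRA_PER_YEAR = 6_000.0      # library, drives, space, power
TAPE_SKILLS_PER_YEAR = 12_000.0    # share of an operator's time

cloud_cost = (ARCHIVE_TB * CLOUD_STORAGE_PER_TB_MONTH * 12
              + RESTORE_TB_PER_YEAR * CLOUD_EGRESS_PER_TB)
tape_cost = (ARCHIVE_TB * TAPE_MEDIA_PER_TB
             + TAPE_INFRA_PER_YEAR + TAPE_SKILLS_PER_YEAR)

print(f"Cloud archive, annual:   ${cloud_cost:,.0f}")
print(f"On-premise tape, annual: ${tape_cost:,.0f}")
```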

Companies often opt for a hybrid backup strategy that blends different public cloud and on-premise options. This is the mature approach: those organisations looked at their requirements and chose appropriate backup locations for different data needs.


Original post:
Do backups belong in the cloud or on-premise? - IT-Online

Read More..

Review of visual analytics methods for food safety risks | npj Science … – Nature.com

Continue reading here:

Review of visual analytics methods for food safety risks | npj Science ... - Nature.com

Read More..

Huawei Cloud in Token2049: Fueling Web3 Advances with Key … – PR Newswire

SINGAPORE, Sept. 13, 2023 /PRNewswire/ -- From September 11 to 12, Huawei Cloud showcased its staking node engine, confidential computing, and ZK rollup at Token2049 in Singapore. At the event, Huawei Cloud discussed the future of the Web3 industry with top vendors, builders, and developers, and stated its commitment to accelerating Web3 innovation through technical breakthroughs.

Node Creation in Seconds, 99% Staking Effectiveness: A New Engine for Web3

Web3 is in the early stage of development and faces many technical challenges regarding the performance and security of on-chain transactions. On September 12, at the MetaEra Summit, Zhang Ziyi, Chief Architect of Huawei Cloud Blockchain, introduced Huawei Cloud's Ethereum staking node hosting service. With innovative algorithms, this service achieves up to 99% staking effectiveness and higher rewards. The QingTian Enclave security framework supports environment and identity authentication and full-link data encryption and decryption. With this security framework, applications and data can run securely on Huawei Cloud Elastic Cloud Servers (ECSs). In addition, Huawei Cloud is accelerating ZK rollup hardware innovation. With Huawei-developed XPUs, architecture innovation, and algorithm optimization, the average confirmation time of ZK rollup transactions has fallen from hours to minutes.

Huawei Cloud QingTian Enclave Safeguards Transactions

Security has always been critical to Web3. Currently, Web3 wallet applications still face severe security challenges when it comes to storing wallet private keys and defending them from attack. The Blockchain Security Alliance Meetup, initiated by the Blockchain Security Alliance, is one of the major activities of Token2049 and has become a focus of the global blockchain security and Web3 ecosystem. Jia Xiaoqiang, Director of Huawei Cloud Virtualization Products, introduced Huawei Cloud's confidential computing solution and Web3 solution, which aim to secure systems, applications, and sensitive data in wallet scenarios.

Jia Xiaoqiang said, "Huawei Cloud QingTian Enclave confidential computing solution is just like a safe for wallet private keys. It supports environment and identity authentication and full-link data encryption and decryption. Applications and data can be run with protection on Huawei Cloud Elastic Cloud Servers (ECSs). Unauthorized users and third parties are isolated to maintain transaction security of private keys and wallets." Huawei Cloud is committed to building a highly secure and available solution for Web3, and all the efforts turned into QingTian Enclave, an end-to-end solution securing the execution environment for applications and sensitive data through software and hardware.

IPFS Cloud Data Ecosystem with Tenfold Rewards

Decentralized storage is another key infrastructure of Web3. However, the industry shares the same headaches in migrating data to the cloud, reducing costs, and improving effectiveness. At the Fil Dev Summit on September 12, Bai Tao, Chief Cloud Storage Solution Architect of Huawei Cloud, shared InterPlanetary File System (IPFS) data service and elastic cloud deployment solution jointly developed by Huawei Cloud and partners. Huawei Cloud's cloud data entry simplifies the process of storing data on IPFS, resulting in a 10-fold increase in data rewards and reduced data storage overhead for enterprises. Additionally, Huawei Cloud's large-ratio EC technologies enable 91% effective capacity usage of disks and scalable storage for data up to 10 EB-level. With the elastic cloud deployment solution, users can choose offline encapsulation, cloud verification, or end-to-end cloud deployment. Huawei Cloud, with over 18 years of experience in enterprise-level storage, aims to deliver cloud storage services that are not only cost-effective but also highly efficient.

Providing "Everything as a Service", Huawei Cloud aims to offer robust infrastructure that enables advanced, distributed computing power for Web3. This commitment extends to building an open, efficient, and secure blockchain platform that can foster innovation in the Web3 ecosystem.

SOURCE Huawei Cloud APAC

Read the original post:
Huawei Cloud in Token2049: Fueling Web3 Advances with Key ... - PR Newswire

Read More..

Bringing your Microsoft workloads to AWS: Why and how you might … – The Stack

Microsoft workloads have been the backbone of organisations' IT strategies for decades: 70% of all enterprise applications are Microsoft-based, and over three quarters of those are still running on-premise. The technology in use is likely to include legacy .NET versions running on Windows Server and SQL Server databases licensed with an Enterprise Agreement, writes Rhys Jacob, CTO at AWS consultancy D55.

But as cloud technology has evolved and digital transformation continues to expand and reach new heights, Microsoft workloads run on-premises aren't fit for purpose. In fact, 48% of on-premise Microsoft workloads are forecast to migrate in the next two years, making this one of the largest market opportunities in core IT and a strategic focus for CIOs and CTOs.

Businesses that keep running these workloads on-premise face a range of challenges: the inability to scale their infrastructure for peaks in demand (unless they're willing to spend a lot to do it), typically high operational and maintenance costs for their growing on-premise technology, an inability to be agile without wholesale code changes, and not being able to transform digitally quickly enough, which affects their ability to remain competitive.

Despite these barriers to growth, the question of whether to migrate Microsoft workloads, particularly onto non-Microsoft cloud services like AWS, can be a difficult one for CTOs to answer. If teams are thinking about migrating Microsoft workloads to the cloud, Microsoft Azure might seem like the obvious choice, but in reality AWS runs more Microsoft applications, and at a better price.

While the thought of migrating all your Microsoft workloads onto the cloud may seem daunting, it has become such common practice that providers like AWS have streamlined the journey to the point where application downtime is non-existent.

The first step is understanding what needs to be migrated and which workloads can be retired. Once this is clear, applications can begin to be reallocated to EC2 Windows Server instances and then ultimately to ECS or Lambda. This supports the lift-and-shift process, including SQL Server databases, which for most enterprises delivers immediate benefits such as reduced costs and increased agility. The temptation for many here, however, is to pause the transformation journey once they have realised the immediate return on investment of a successful lift and shift.

As with any migration to the cloud, however, to gain its real benefits, including improved agility, speed to market, reduced cost, better scalability and reduced energy consumption, businesses must take a cloud-native approach to their data systems. This means modernisation and platform optimisation.

This can be done, to some extent, through re-platforming, whereby organisations migrate their applications without making wholesale changes to the architecture or code. This approach also means organisations can migrate their on-premise SQL Server databases to Amazon Relational Database Service (Amazon RDS). In doing so, businesses can continue using SQL Server but are no longer required to undertake time-intensive tasks such as installation, patching, configuration and updates. However, the costly licensing fees attached to SQL Server will continue.
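As a rough illustration of what this step can look like, the sketch below uses boto3 to provision a managed SQL Server instance on Amazon RDS; the identifier, instance size, storage and credentials are placeholders for illustration, not recommendations:

```python
# Sketch: provisioning a managed SQL Server instance on Amazon RDS with boto3.
# Identifier, region, instance class, storage size and credentials are placeholders.
import boto3

rds = boto3.client("rds", region_name="eu-west-2")

response = rds.create_db_instance(
    DBInstanceIdentifier="finance-app-sqlserver",  # placeholder name
    Engine="sqlserver-se",                         # SQL Server Standard Edition
    LicenseModel="license-included",               # licence billed through AWS
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=200,                          # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",         # use a secrets store in practice
    MultiAZ=True,                                  # managed high availability
    BackupRetentionPeriod=7,                       # automated backups, in days
)
print(response["DBInstance"]["DBInstanceStatus"])
```

The point of the sketch is that patching, backups and failover become the provider's job rather than an in-house task, which is the "immediate benefit" described above.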

For the full benefits of cloud, re-factoring and re-architecting, which typically involve application changes and entire re-architecture strategies, are the steps that allow organisations to truly unlock the potential of cloud technology.

Here, businesses that no longer want to keep paying the licensing costs for SQL Server can move their database to Amazon Aurora, a fully managed database built for the cloud, using Babelfish, which allows Aurora to understand queries from applications written for SQL Server, completing the database modernisation.

Meanwhile, to re-platform a business application, converting it to .NET Core and running it on Linux on AWS instead of Windows not only saves the Windows Server licence fee but also allows further re-architecture for greater modernisation, and supports organisations in becoming fully cloud-native. This means breaking existing legacy monoliths down into more maintainable microservices, allowing each microservice to adapt and grow independently. A clean separation between microservices lets developers focus on individual services without impacting the broader system.

Crucially, microservices also allow applications to communicate with one another via events. Put simply, when one application or service emits an event, other applications can be notified and decide whether or not they need to do anything with that data.
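A minimal sketch of that event-driven pattern, assuming Amazon EventBridge as the message bus (the bus name, event source and payload fields below are invented for illustration):

```python
# Sketch: one microservice emits an event; others subscribe via EventBridge rules.
# Bus name, source and detail-type strings are placeholders for illustration.
import json
import boto3

events = boto3.client("events", region_name="eu-west-2")

def publish_order_created(order_id: str, total: float) -> None:
    """Emit an OrderCreated event; interested services react independently."""
    events.put_events(
        Entries=[
            {
                "EventBusName": "company-event-bus",  # placeholder bus name
                "Source": "orders-service",
                "DetailType": "OrderCreated",
                "Detail": json.dumps({"orderId": order_id, "total": total}),
            }
        ]
    )

# A billing or notification service would be wired to this event by an
# EventBridge rule and can decide for itself whether the data is relevant.
publish_order_created("ORD-1001", 249.99)
```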

The benefits of modernisation are far-reaching. From enhanced security to increased flexibility and lower licensing and consumption costs, organisations can unlock huge growth potential once their cloud infrastructure is optimised to support business objectives. It's these benefits that have made 84% of AWS's customers prioritise application modernisation in the next two years.

Migrating Microsoft workloads to AWS may seem drastic, but the process of getting there has never been more streamlined thanks to AWS technology. In fact, AWS now has a 45% larger share of Microsoft workloads on the cloud than the next largest cloud provider, and it's why our industry has reported a 23% increase in CXO-level modernisation conversations. Migrating these workloads is now becoming just a matter of time for most organisations.

Read more:
Bringing your Microsoft workloads to AWS: Why and how you might ... - The Stack

Read More..

Proof-of-Work on Blockchain Explained – LCX – LCX

The Significance of Proof-of-Work

Proof-of-work serves multiple essential purposes within the blockchain ecosystem. Firstly, it ensures the distributed consensus required for validating transactions and maintaining a single version of the truth across the network. Secondly, it acts as a deterrent against malicious actors attempting to manipulate the system by requiring significant computational resources and energy expenditure. Lastly, PoW serves as an incentive mechanism, rewarding miners with newly minted cryptocurrency tokens for their computational efforts.

Transaction Validation:

When a user initiates a transaction on the blockchain, it gets broadcast to all nodes within the network. Miners collect these transactions and group them into blocks. Before adding a block to the chain, miners need to validate the transactions within it.

Hashing:

Miners utilize cryptographic hash functions, such as SHA-256 (used in Bitcoin), to create a unique digital fingerprint of the block's data, including the transactions and a reference to the previous block. The output of this hashing process is called a hash.

Mining Difficulty:

To control the rate at which new blocks are added to the blockchain and maintain consistency, the network adjusts the mining difficulty periodically. The difficulty is determined by the target value set for the hash. Miners must find a hash value that meets this target, which is typically achieved by manipulating a value called the nonce.

Finding the Nonce:

Miners iteratively change the nonce value in the block's header until they find a hash that meets the difficulty target. Since the hash function is deterministic, miners need to perform numerous computations (hash attempts), varying the nonce until they discover a valid hash.

Proof-of-Work:

The miner who successfully finds a valid hash, which meets the required difficulty level, broadcasts it to the network. Other participants can easily verify the validity of the hash by applying the same hash function and comparing the result to the target.
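For readers who want to see the mechanics end to end, here is a minimal, self-contained sketch in Python, using SHA-256 and a simplified leading-zeros target rather than Bitcoin's exact difficulty encoding, of finding a nonce and then verifying it:

```python
# Simplified proof-of-work: find a nonce whose SHA-256 hash of the block
# header starts with a given number of zero hex digits. Real networks encode
# the target differently, but the principle is the same.
import hashlib

def mine(header: str, difficulty: int) -> tuple[int, str]:
    """Vary the nonce until the hash meets the difficulty target."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

def verify(header: str, nonce: int, difficulty: int) -> bool:
    """Anyone can check the claimed nonce with a single hash computation."""
    digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

header = "prev_hash=000abc|merkle_root=9f2e|timestamp=1694563200"  # toy header
nonce, digest = mine(header, difficulty=4)
print(f"nonce={nonce} hash={digest}")
print("valid:", verify(header, nonce, difficulty=4))
```

Note the asymmetry the article describes: mining requires many hash attempts, while verification takes a single one.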

Block Addition and Rewards:

Once a valid hash is found, the miner adds the block to the blockchain, including the hash of the previous block, and propagates it throughout the network. As a reward for their efforts, the successful miner receives a predetermined amount of cryptocurrency tokens, often along with transaction fees associated with the transactions in the block.

The Proof-of-Work consensus mechanism has the following problems:

The 51% risk: If a single entity controls 51% or more of the network's mining (hash) power, it can corrupt the blockchain by out-computing the rest of the network.

Time-consuming: To discover the correct solution to the puzzle that must be solved to mine the block, miners must examine numerous nonce values, which is a time-consuming process.

Resource consumption: In order to solve the difficult mathematical puzzle, miners use a substantial amount of computing capacity, which wastes valuable resources (money, energy, space, equipment). By the end of 2028, it is anticipated that 0.3% of the world's electricity will be used to verify transactions.

Not instantaneous transactions: Confirmation of a transaction typically takes 10 to 60 minutes, because it takes time to mine the transaction into a block and add it to the blockchain, thereby committing it.

Proof-of-work is a robust consensus algorithm that has revolutionized the world of cryptocurrencies by providing a secure and decentralized system. By employing computational work, PoW ensures the integrity of transactions and prevents malicious activities within the blockchain network. While it has been successful in many cryptocurrencies, the increasing energy consumption associated with PoW has raised concerns about its long-term sustainability. However, ongoing research and the development of alternative consensus algorithms continue to explore more energy-efficient and environmentally friendly options for securing blockchain networks.

More:

Proof-of-Work on Blockchain Explained - LCX - LCX

Read More..

AI and cloud computing: A symbiotic relationship or a hostile … – Lexology

Artificial intelligence (AI) and cloud computing are both popular buzzwords, generally treated as separate concepts without considering the potential interplay and interconnectedness between the two technologies. Companies could benefit from incorporating cloud computing functionality into their AI operations. For example, AI algorithms can be trained and deployed at far greater capacity on cloud infrastructure than on local servers, and cloud-based AI can process vast quantities of data. This article explores the potential for synergy between cloud computing and AI algorithms while identifying the dissonance that might result from the collaboration.

The synergy between AI and cloud computing

Cloud computing and AI tools are each relied on as technological developments that have provided convenience to users and, to some extent, disrupted existing industries. When the two are combined, a company may experience the following benefits:

The dissonance between AI and the Cloud

As with all technological developments and tools, there are various risks associated with the use of cloud-based AI, which include (without limitation) the following:

Cloud-based AI provides an opportunity to significantly enhance a company's AI capabilities, and companies that are first to embrace cloud-based AI could gain a competitive advantage in doing so. However, these advantages may only be realised when a company has carefully considered the cloud marketplace, engaged in vendor due diligence processes, and entered favourable cloud service agreements. Failure to maintain proper vigilance over this process could lead to hidden expenses that outweigh the benefits of cloud-based AI. It is essential to have legal support during this process to ensure that your risk exposure is reduced or mitigated.

More:
AI and cloud computing: A symbiotic relationship or a hostile ... - Lexology

Read More..

Reducing disaster risk for the poor in tomorrow’s cities with … – Nature.com

Computational science, including numerical simulation through high-performance computing, data analytics and visualization, can underpin the SDGs. This is especially true when deployed in collaboration with other scientific domains and as part of co-produced knowledge-generation processes with a range of urban stakeholders and end users. By acknowledging the systemic nature of the causes of disasters, such research must facilitate the inclusive engagement of scientists, engineers, policy-makers, economists, private sector groups and, critically, representatives of the citizens who will live in the cities experiencing rapid growth. The SDGs recognize that urban development plans made today will either brighten or blight the lives of citizens for centuries.

A three-part agenda for interdisciplinary science marks out how computational science can be used to underpin and catalyze this ambition. Each step in the agenda can stand alone or together form a structured process from better understanding to better action to reduce disaster risk in future cities.

Digitally capturing inclusive future visions

Many social science methodologies are available with which to solicit preferences for neighborhood or city-wide futures. A challenging task is enabling such methodologies to capture the subjective visions of the future of diverse urban stakeholders. Only by doing this can future cities disrupt established norms and consider what a safer city of the future looks like from different or multiple perspectives. The outputs of such methodologies are difficult to translate into policy options; they are often qualitative and can appear imprecise to policy-makers. However, such qualitative information has a huge potential to act as a basis for future urban scenario development if it can be assimilated into precise digital representations. Computational science can help here. Spatial components of the projections (such as desired land-use zones and their attributes) can be translated into land-use plans using geographic information systems and related computational tools. This information can be complemented with predicted patterns of urban growth, determined using machine learning algorithms that rely on remote sensing data. Spatial priorities emerging from stakeholder groups are thus rendered into high-resolution digital representations of possible urban futures8.

Such digital future cities also incorporate detailed attributes of people and assets. These include engineering characteristics for each building and infrastructure component and system, information on socio-demographics for each individual and household, and data on socio-physical interdependencies (for instance, where each person goes to work and where each child goes to school). The virtual representation is achieved using several computational models, including synthetic population-generation algorithms, human mobility methods, procedural modelling and optimization processes8,9.

Exposing digital futures to likely hazard events

The high-resolution virtual representations of possible future urban developments must be exposed to hazard events that are consistent with the hydrological and geophysical environment of the city. A series of hazard events can be selected to cover possible life-cycle experiences of the development. A key effort in the Tomorrow's Cities project, for example, has been to code these events into high-resolution, physics-based simulations10 (Fig. 1), taking advantage of the latest developments in high-performance computing. A custom-developed web-based application merges site-specific intensity data from the hazard event with exposure and vulnerability information in the future development scenario to compute the likely impact of any particular event. These calculations use several underlying computational tools, including high-resolution, multi-hazard fragility models developed from detailed building-level numerical performance assessments11 and data-mining models that can distinguish the magnitude of disaster impacts on the basis of social vulnerability indicators12,13. Agent-based modelling is another powerful approach in the field of disaster simulation that allows researchers to simulate the dynamic behavior of individual entities (agents), with their socio-economic features, within a complex system.

Fig. 1: Disaster reduction programs often depend on externally developed solutions imposed on specific local challenges. Computational science provides digital tools that can support innovative capacity strengthening, freeing possible-futures thinking from the responsibility for real lives and encouraging experimentation with innovative planning solutions. a, b: The virtual city of Tomorrowville is shaken by a virtual earthquake (a) and flooded by a virtual extreme rainfall event triggered by climate change (b), exposing spatial variability in exposure and driving reconsideration of spatially uniform building regulations.

Depending on the particular scenario, multiple impact metrics reflecting diverse aspects of the lived experience (for instance, number of deaths, number of displacements, number of injuries, hospital occupancy, lost days of production or school and total replacement costs) can be calculated and mapped, providing a detailed picture of the total impact of any disaster event resulting from the decisions and policies that generated the specific digital future being tested. Each of these metrics can be disaggregated in different ways, including by age, gender, income or any other attribute contained in the demographic dataset of the virtual future representation, providing an understanding of the consequences of the decisions made during planning and scenario building.
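To make the calculation concrete, here is a heavily simplified sketch in Python of that impact step, assuming a lognormal fragility curve per building class and a small invented synthetic population; all parameter values are illustrative and are not taken from the Tomorrow's Cities tools:

```python
# Sketch: combine hazard intensity, building fragility and a synthetic
# population to produce disaggregatable impact metrics. All numbers invented.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Lognormal fragility parameters per building class: median capacity (g), beta.
FRAGILITY = {"low_code_masonry": (0.25, 0.6), "engineered_rc": (0.70, 0.5)}

def p_collapse(building_class: str, pga: float) -> float:
    """Probability of collapse given peak ground acceleration (lognormal CDF)."""
    median, beta = FRAGILITY[building_class]
    return norm.cdf(np.log(pga / median) / beta)

# Synthetic exposure: 1,000 households with a building class, an income group
# and a simulated site-specific ground motion from one scenario event.
n = 1_000
classes = rng.choice(list(FRAGILITY), size=n, p=[0.7, 0.3])
income = rng.choice(["low", "middle", "high"], size=n, p=[0.5, 0.35, 0.15])
pga = rng.lognormal(mean=np.log(0.3), sigma=0.4, size=n)  # in g

collapse_prob = np.array([p_collapse(c, g) for c, g in zip(classes, pga)])
displaced = rng.random(n) < collapse_prob  # one sampled realisation

# Disaggregate the impact metric by income group.
for group in ("low", "middle", "high"):
    mask = income == group
    print(f"{group:>6}: {displaced[mask].mean():.1%} of households displaced")
```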

To complete the picture of the root causes of disaster impact for policy-makers, quantifying and mitigating social vulnerability (the susceptibility of an individual from a given group to the impacts of hazards) can help to build resilience to multiple types of hazard shock. So far, there is a dearth of disaggregated data recording disaster impacts and social vulnerability measures simultaneously, and the current priority is to collect longer data series. These might emerge, for example, from satellite remote sensing; computational methods in unsupervised learning and data clustering as well as deep learning (for instance, neural networks) could then be leveraged to refine quantitative modelling of social vulnerability. Exploring nonlinear and multi-scalar relations between exposure, vulnerability and disaster impacts is an important research ambition14.

Convening risk agreement and institutional learning

Impact is objective, but risk depends on personal or group priorities; the value of property replacement, for example, has a different priority depending on whether or not you own property. Computational science supports interactive representations of complex urban impact scenarios, facilitating the quantification of subjective risk priorities by generating impact-weighting matrices that include the voice of marginalized groups in the local definition of disaster risk. Equipped with weighted risk definitions, attention turns to exposing the root causes of such risk in the choices and decisions behind any development plan. Dynamic digital visualizations of the impact metrics produced by simulation-based tools could help to elucidate the distribution of risk inherent in development planning and to diagnose risk drivers, inverting complex causal chains and exposing the underlying flaws in decision-making. In the case of the Tomorrow's Cities Hub, this is communicated to stakeholders through the web-based application. More formal inversions uncovering root causes from impact metrics are needed to clarify the diagnosis and reinforce evidence-based decision-making for risk reduction.

Focusing on the origins of risk in the decisions, policies and assumptions underpinning future development scenarios allows stakeholders to examine their choices and reflect on broader governance questions. Modifications to particular stakeholder priorities that are likely to lead to reduced risk are implemented in the digital development scenarios. These are then subjected to the same simulated hazard events to test the resulting risk reduction. The process is iterated, optimizing the future for lower risk, elucidating the effectiveness of governance processes and supporting evidence-based decision-making.

Original post:

Reducing disaster risk for the poor in tomorrow's cities with ... - Nature.com

Read More..