
Gnosis Chain spends $5M on validator incentive program for decentralization – Cointelegraph

Gnosis Builders, developer of blockchain network Gnosis Chain, has announced a $5 million project to increase the number and diversity of validators through incentive mechanisms. The new project is called Gnosis VIP, according to an April 18 announcement from the company.

As part of the new project, Gnosis is launching a Geographic Diversity Program that seeks to increase the number of countries in which Gnosis Chain validators operate.

The network currently has over 100,000 validators spread across 60 countries, and the program's goal is to increase the number of countries to 180 by year's end, the announcement said.

According to the program's official webpage, for each of the 90 countries listed, the first ten validators to start operating there will receive 388 meta Gnosis (worth $1,368.18 at April 12 prices) over the course of six months. Meta Gnosis (mGNO) is the wrapped and staked version of the network's native coin, Gnosis (GNO). Each mGNO can be redeemed for 1/32 GNO.

The first payment of 38 mGNO ($134) will be disbursed after the first 30 days the node operates. The size of the payment will increase each month, and the last payment at the end of the six months will be for 98 mGNO ($345.57).
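For readers who want to sanity-check those figures, here is a minimal Python sketch of the conversion described above. The 1/32 redemption ratio and the quoted dollar amounts come from the article; the implied GNO price is simply derived from them, not an official quote.

```python
# Minimal sketch: value an mGNO reward in GNO and USD.
# The 1/32 redemption ratio and dollar figures are from the article;
# the implied GNO price is derived from them, not an official quote.

MGNO_PER_GNO = 32  # each mGNO redeems for 1/32 GNO

def mgno_value_usd(mgno_amount: float, gno_price_usd: float) -> float:
    """USD value of an mGNO amount at a given GNO price."""
    return (mgno_amount / MGNO_PER_GNO) * gno_price_usd

# 388 mGNO quoted at $1,368.18 implies a GNO price of about $112.84.
implied_gno_price = 1368.18 / (388 / MGNO_PER_GNO)
print(f"Implied GNO price: ${implied_gno_price:.2f}")
print(f"First payment, 38 mGNO: ${mgno_value_usd(38, implied_gno_price):.2f}")  # ~$134
print(f"Final payment, 98 mGNO: ${mgno_value_usd(98, implied_gno_price):.2f}")  # ~$345.57
```

The computed values match the article's quoted payments of $134 and $345.57, which is a useful consistency check on the numbers.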

Related: 1Inch network expands to Gnosis Chain and Avalanche

In an email statement to Cointelegraph, Gnosis CEO Martin Köppelmann expressed hope that the new program will help to improve both the security and performance of Gnosis Chain.

Debates often rage in the crypto community over which networks are the most decentralized, with many experts claiming that a network cannot be scalable, secure, and decentralized at the same time. This conflict in design philosophy is often called the blockchain trilemma.

In his email statement, Köppelmann emphasized that geographical diversity is only one aspect of decentralization, and others are also important to ensure resilience and security.

Read the original post:

Gnosis Chain spends $5M on validator incentive program for decentralization - Cointelegraph


Top 10 Web3 Impacts on Traditional Industries Due to Decentralization – Analytics Insight

The top 10 Web3 impacts on traditional industries due to decentralization are listed in this article

Web3 can disrupt and alter sectors ranging from banking and art to healthcare and education. It has already had a tremendous influence on the financial industry: blockchain-enabled decentralized finance (DeFi) protocols enable peer-to-peer lending, borrowing, and trading without the need for middlemen.

1. Decentralized Finance (DeFi): Web3 is revolutionizing the financial sector by introducing DeFi protocols that enable peer-to-peer lending, borrowing, and trading without the use of middlemen. This upends established banking and investing paradigms by giving customers greater control over their cash and allowing them to earn higher profits.

2. Digital Art: The introduction of non-fungible tokens (NFTs) and blockchain-based digital art markets has transformed the art business. This decentralized technique allows artists to monetize their work without the need for intermediaries while also ensuring the art's authenticity and provenance.

3. Supply Chain Management: Web3 is changing supply chain management by offering a transparent and decentralized ledger of all transactions, allowing for increased supply chain transparency and efficiency.

4. Cybersecurity: Blockchain-based decentralized security protocols are more secure and more resistant to attack than centralized ones. This has the potential to disrupt existing cybersecurity approaches, providing people and companies with enhanced safety.

5. Education: Decentralized education systems increase access to education while also introducing new financial options for instructors. Peer-to-peer learning and credential verification can be facilitated via Web3-enabled applications, upsetting established education structures.

6. Healthcare: Decentralized healthcare solutions based on blockchain technology provide improved security and transparency in the exchange and management of medical data. This innovation has the potential to completely revolutionize the healthcare business by improving patient outcomes and lowering costs.

7. Digital Identity: Web3 enables self-sovereign identity, giving people authority over their digital identities and data. This has the potential to disrupt established identity verification approaches while also providing consumers with improved privacy and security.

8. Social Media: Decentralized social media platforms challenge established business models by giving users more control and ownership over their data. This can change the advertising business while also providing consumers with improved privacy and security.

9. Real Estate: Traditional real estate models are being disrupted by Web3-enabled platforms that enable peer-to-peer property transactions and eliminate the need for intermediaries. This disruption can completely revolutionize the real estate sector by making it more accessible and inexpensive to individuals.

10. Governance: Through the use of decentralized autonomous organizations (DAOs), Web3 enables decentralized decision-making. This has the potential to upend existing governance paradigms by allowing for more participative and democratic decision-making processes.

Original post:

Top 10 Web3 Impacts on Traditional Industries Due to Decentralization - Analytics Insight


Anndy Lian on decentralization and the potential impact of Web4 – CryptoSlate


Link:

Anndy Lian on decentralization and the potential impact of Web4 - CryptoSlate


Pharma Clinical Trials: Decentralization and Digital Trends – The Medicine Maker

Decentralized trials (DCTs) are not a novel concept, but the disruption of nearly 68 percent of clinical trials during the height of the pandemic led to more widespread interest in and adoption of hybrid and virtual trial models. Studies have shown that DCTs can lead to shorter development cycle times, lower clinical trial screen failure rates, and fewer protocol amendments. With wider DCT adoption, however, comes an increase in data sources (particularly external data sources), leading to a surge in data volume. Good data management is crucial to control this. From acquisition and analysis to data cleaning and statistical processes, data managers are the stewards responsible for guiding a modern data strategy amid the perfect storm of growing data complexity, digitization initiatives, and ever-increasing pressure to accelerate timelines.

In the past, data management was often siloed, focused on cleaning and querying listings of electronic data capture (EDC) data. However, non-EDC and external data sources now contribute significantly to overall data volume. The percentage of data coming from outside EDC continues to rise, while a rise in outsourced models has prompted the data management role to become more oversight focused.

As DCTs become more widely adopted and as the volume of disparate data continues to grow, data management processes will become even more complex. An Industry Standard Research survey in 2019 revealed that 38 percent of pharma and contract research organizations anticipated DCTs making up a large portion of their portfolios, and 48 percent expected trials to operate with the majority of activities taking place from the participants' homes. When revisiting the same questions only one year later, all of the respondents anticipated that decentralized trials would make up a significant portion of their research portfolios.

Although remote participation is pleasing for patients, it results in even greater data source volume and variety, which is difficult for clinical trial teams to manage. Research from the Tufts Center for the Study of Drug Development in 2019 found that 75 percent of life sciences organizations were still using SAS and Excel to integrate and analyze data. Over 80 percent of respondents reported data management activities as time-consuming and labor-intensive.

The same study also found that over two-thirds of clinical trial sponsors were using or piloting at least four types of data. The number of sources has nearly doubled since then and will continue to rise as DCT models are widely operationalized. The study also reported a 40 percent increase in last patient, last visit (LPLV) to database lock cycle times for companies with five or more data sources, concluding that contending with disparate data sources was contributing to longer database lock cycle times. In our services organization, trials frequently average eight or more data sources, but many include over 15!

The trends that contributed to the Tufts study findings have only accelerated since the onset of the pandemic, which means one thing: data chaos. If the industry doesn't adopt new approaches, data management will only get more challenging.

Identifying and creating a data strategy roadmap in the midst of these growing pains can present a challenge, but it is essential if you want your organization to be ready to face the future. The increased adoption of virtual and DCT approaches to clinical trials necessitates a balance between using advanced solutions that connect trials with a greater number of patients and maintaining efficient, high-quality data review and analysis. Improving the overall patient experience is a motivating factor for DCTs, as is easing the burden of traveling to and from sponsor sites. The problem is that many organizations lack the infrastructure to accommodate the shift.

Operational leaders plagued by oversight and monitoring challenges in DCTs need methods to streamline and standardize data from increasingly non-traditional sources. However, there are now a number of data solutions available in the industry that can help. Below are just two examples of how companies are using data management platforms to help manage more external data streams.

With an increasing volume of external data streams, Bristol Myers Squibb (BMS) sought out a data management platform that would integrate with and support its current EDC platform while also supporting data curation and aggregation. Implementing the platform streamlined clinical data flow, providing quicker access to clean data and streamlining data acquisition, mapping, and standardization, all of which resulted in faster access to data by downstream teams. The platform alleviated pain points experienced with BMS's previous infrastructure by compiling all data into a unified source, giving the company the ability to create cross-study analytics reports for deeper insights.

A second example: Karyopharm Therapeutics worked on randomized clinical trials with hospital patients suffering from severe COVID-19; it was the first study of an XPO1 inhibitor in patients with viral infections. To support rapid data collection, cleaning, and review for this program, Karyopharm partnered with a data management platform, working closely to build a fully validated database to collect data from physicians and patients in just 15 days. This accelerated timeline enabled Karyopharm to meet the first patient milestone in its critical research initiative.

Cloud-based centralized data management platforms allow clinical trial teams to manage their data more efficiently, mitigate costs, minimize timeline delays, and improve cycle times. Put simply, cloud-based platforms modernize data infrastructure by compiling data sources into a unified source of truth. By implementing a cloud-based platform with expert data configuration, management, and statistical analysis, some companies saw up to a 50 percent decrease in cycle time from LPLV to database lock in 2021.

In short, the right data management tools facilitate data transformation, delivering consistent real-time updates and allowing researchers to analyze data faster and uncover insights needed for critical decision making. Moreover, identifying areas and opportunities to pivot earlier in the trial process can help prevent avoidable delays. To keep up with the evolving clinical trial landscape, companies must employ a modern strategy, which requires three core elements: an interoperable approach to DCTs and other non-traditional trial models, investment in resources to ease the data management burden, and the ability to generate meaningful insights from a good data platform.

Read the original here:

Pharma Clinical Trials: Decentralization and Digital Trends - The Medicine Maker


[Local and Beyond] ‘Decentralization key to curing metropolitan sickness’ – The Korea Herald

North Gyeongsang Province Gov. Lee Cheol-woo (North Gyeongsang Province)

ANDONG, North Gyeongsang Province -- South Korea has achieved remarkable economic success after the 1950-53 Korean War, but faces grave challenges ahead as the country suffers from a rapidly aging population, a low birth rate, and unhappiness.

North Gyeongsang Province Gov. Lee Cheol-woo, who also heads the Governors Association of Korea, believes that the quality of Koreans' lives is being degraded by what he calls "metropolitan sickness."

The phenomenon of metropolitan sickness is attributed to the fact that over half of the country's population resides in what is known as the Greater Seoul area -- consisting of Seoul, Incheon and Gyeonggi Province -- despite it occupying less than 12 percent of the country's land, according to the governor.

The Greater Seoul area is also home to 86.9 percent of the nation's 1,000 largest companies. More than 100,000 young people migrate from rural areas to the Greater Seoul area in search of better paying jobs.

"There is no future for South Korea if the phenomenon of concentration in the Greater Seoul area is not resolved," he said in an interview with The Korea Herald.

Overpopulation in the capital region has pushed up real estate prices and competition for jobs. "Overwhelmed by the heated race, young people in the city abandon their dreams of having a family," he said. This vicious cycle threatens not only young people's quality of life but even the nation's existence, he added.

According to Statistics Korea, the population of the Greater Seoul area has grown by 14 percent from 2002 to reach 23 million as of March 2023. In contrast, the population in the rest of the country has risen only 1 percent.

Despite the challenges, Lee has pinned his hopes on the Yoon Suk Yeol administration, which has placed a strong emphasis on promoting local autonomy and equal opportunities for all citizens. The Yoon administration has set the goal of offering people equal opportunities regardless of where they live, which Lee believes can resolve the current imbalance between the Greater Seoul area and the rest of Korea.

In a meeting with the heads of local governments in February, President Yoon emphasized the need to decentralize its power -- which is heavily concentrated in Seoul and its vicinity -- and transfer some of it to local governments.

Lee noted that previous administrations introduced special laws aimed at promoting balanced national development, but they failed to address the issue on a national level.

This is because they tried to promote balanced national development "from the central government's perspective," he said. "The central government has led everything without delegating sufficient authority and budget to regional governments."

Gov. Lee believes that promoting regional competitiveness is essential, and that this can be achieved through diversification of industrial structures, cultural offerings, growth models, lifestyles and values.

"By promoting such diversity, each region can develop its unique strengths and advantages, ultimately contributing to the overall competitiveness of the nation," he said.

He is actively taking the lead in promoting regional competitiveness in his own province. He is working to diversify the province's industrial structure and enhance its cultural offerings, growth models, lifestyles and values to promote greater regional competitiveness.

North Gyeongsang Province is home to several cities with strong industrial bases, such as Pohang and Gumi.

Pohang is a major hub for the steel industry, and home to the headquarters and production plants of companies such as Posco, Hyundai Steel and Dongkuk Steel. Meanwhile, Gumi is an important hub for the electronics industry. The city boasts large-scale business districts and research institutes of major companies such as Samsung Electronics, Samsung SDI, LG Electronics, LG Display, Hanwha Aerospace, LG Innotek and SK Siltron.

The governor's plans for promoting the province's regional competitiveness include driving growth and development in key strategic industries such as the chips industry, which will strengthen the electronics industry in Gumi. He also plans to focus on the battery and biotechnology industries, which have the potential to become significant contributors to the province's economy beyond the steel industry centered in Pohang and Andong. In addition, efforts will be made to elevate the agricultural, livestock and fishery industries into high-tech, export-oriented and profitable sectors, with a particular emphasis on innovation and sustainability.

By pursuing these strategies, Gov. Lee hopes to enhance the province's overall competitiveness and contribute to the growth and development of the country as a whole.

"In order for balanced national development to be possible, and for people to be able to move from the Greater Seoul area to other regions, there must be ample opportunities available in the provinces," he said.

North Gyeongsang Province will innovatively lead the "provincial era," the governor said, adding that central and local governments must work together for shared growth and development.

Gov. Lee began his political career in 2008 by winning a parliamentary seat for the constituency of Gimcheon, North Gyeongsang Province, where he was born and raised. After serving three terms, he was elected the governor of North Gyeongsang Province in 2018 and appointed the chairman of the Governors Association of Korea last year. Established in 1999, the association represents the common interests of local governments and aims to promote balanced development.

By Shin Ji-hye (shinjh@heraldcorp.com)

See more here:

[Local and Beyond] 'Decentralization key to curing metropolitan sickness' - The Korea Herald


Moving beyond the blockchain trilemma: L1 vs. L2 – Cointelegraph

As of February 2023, over 44.15 million unique addresses have a non-zero balance of Bitcoin (BTC). While this may seem impressive, let's face it: blockchain technology has come a long way since Bitcoin's inception in 2009.

Bitcoin addresses with a non-zero balance. Source: Glassnode

However, as the technology continues to evolve and gain mainstream adoption, scalability remains one of the biggest challenges facing the industry. Bitcoin and Ethereum, two of the largest blockchain networks, are highly decentralized, with thousands of nodes operating on each network (17,553 nodes for Bitcoin and 7,099 nodes for Ethereum as of April 14, 2023).

Ethereum mainnet statistics. Source: Ethernodes

While this decentralization provides greater security, it also results in slower transaction speeds and scalability issues due to the significant computational resources required to maintain the continuously growing number of nodes.

Hence, the blockchain trilemma, coined by Vitalik Buterin, suggests that blockchains can only have two out of three properties: scalability, security and decentralization. As a result, this fundamental trade-off represents a significant barrier to the widespread adoption of blockchain technology.

There are two primary strategies that have been introduced to tackle the scalability challenge: layer-1 (L1) and layer-2 (L2) solutions. While L1 solutions seek to optimize the base layer of a blockchain, L2 solutions provide an additional layer on top of the base layer to facilitate faster and more affordable transactions. Needless to say, this has sparked an ongoing battle between the two approaches as each demonstrates unique strengths and weaknesses.

Layer-1 blockchains, such as Bitcoin and Ethereum, are designed to optimize the foundational layer of a blockchain protocol to increase transaction throughput and reduce fees. Their maximum capacity is often limited by network congestion and other factors, so L1 scaling solutions directly extend the blockchain protocol to improve scalability.

A prominent example of this is the introduction of Ethereum 2.0 and the subsequent development of danksharding. Sharding aims to increase Ethereum's transaction processing speed and reduce fees by splitting the network into smaller, more manageable shards. Each shard can then process transactions in parallel, significantly increasing the network's overall speed.
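To make the idea concrete, here is a toy Python sketch of the core concept only, not Ethereum's actual design: transactions are partitioned into shards by sender address, and because the resulting batches are disjoint, they can be processed in parallel.

```python
# Toy sketch of the sharding concept (not Ethereum's actual design):
# partition transactions by sender so disjoint shards can run in parallel.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

NUM_SHARDS = 4

def shard_of(sender: str) -> int:
    """Assign a transaction to a shard based on its sender address."""
    return hash(sender) % NUM_SHARDS

def process_shard(txs: list) -> int:
    """Stand-in for per-shard execution; returns the number processed."""
    return len(txs)

txs = [{"sender": f"0x{i:040x}", "value": i} for i in range(100)]

shards = defaultdict(list)
for tx in txs:
    shards[shard_of(tx["sender"])].append(tx)

# Each shard's batch is independent of the others, so they can be
# executed concurrently rather than one after another.
with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
    results = list(pool.map(process_shard, shards.values()))

print(f"Processed {sum(results)} transactions across {len(shards)} shards")
```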

Layer-2 blockchains, on the other hand, refer to a network or technology that operates on top of an underlying blockchain protocol with the aim of improving scalability as well. The idea behind L2s is to shift transactions from the base layer blockchain to an adjacent system architecture that is able to process the majority of the data and then report back to the base blockchain to finalize the result.

For instance, Ethereum is a layer-1 network, and a number of layer-2 solutions have been built to improve transaction speeds on the Ethereum network, including Polygon (MATIC), Optimism (OP) and Arbitrum (ARB).

Undoubtedly, the scalability battle has come to the forefront with recent developments in L1 and L2 blockchains. That makes understanding the primary differences between L1 and L2 blockchain networks all the more crucial.

Layer-1 blockchains and layer-2 scaling solutions differ not only in their purpose but also in their fundamental design and architecture. L1 blockchains are designed to be self-sufficient, meaning that all the necessary layers for data availability, consensus, and execution are integrated into a single system. This design is intended to provide the security, decentralization, and immutability that are the hallmarks of blockchain technology.

In contrast, layer-2 scaling solutions are designed to enhance the performance of L1 blockchains rather than operate as independent blockchains. Layer-2 scaling solutions use off-chain techniques such as state channels, nested blockchains, rollups and sidechains to process transactions faster and more efficiently. In this way, layer-2 scaling solutions can increase the transaction throughput of L1 blockchains without compromising their security and decentralization.
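As a loose illustration of the rollup variant of this idea, the Python sketch below (vastly simplified relative to any production rollup) executes a batch of transfers off-chain and produces only a compact hash commitment, which is the kind of summary an L2 reports back to the base layer.

```python
# Loose illustration of the rollup idea: execute transfers off-chain,
# then hand the base layer only a compact commitment to the result.
import hashlib
import json

def execute_batch(state: dict, txs: list) -> dict:
    """Apply transfers off-chain, returning the new account balances."""
    new_state = dict(state)
    for tx in txs:
        new_state[tx["from"]] -= tx["amount"]
        new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

def commitment(state: dict, txs: list) -> str:
    """Hash the batch plus resulting state; a stand-in for what an L1 stores."""
    payload = json.dumps({"state": state, "txs": txs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

state = {"alice": 100, "bob": 50}
batch = [{"from": "alice", "to": "bob", "amount": 30}]

new_state = execute_batch(state, batch)
# Only this short digest needs to reach the base layer, not every transaction.
print("L1 commitment:", commitment(new_state, batch))
```

Real rollups also attach fraud proofs (optimistic rollups) or validity proofs (zk-rollups) so the base layer can trust the commitment; the sketch omits that entirely.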

Another significant difference between L1 and L2 scaling solutions lies in their scalability methods. L1 blockchains depend on various techniques such as consensus mechanism changes, chain forking and sharding to boost their transaction throughput. While these methods can improve transaction speeds, they can also lead to network congestion, security risks and fragmentation. L2 scaling solutions, on the other hand, process transactions off-chain, allowing for increased speed and efficiency while still relying on the primary network for security and decentralization. This approach reduces the risk of network congestion, minimizes fragmentation and enhances the overall performance of the blockchain ecosystem.

The Nakamoto coefficient, the minimum number of independent entities that would have to collude to compromise a network, is an important metric to consider when evaluating the level of decentralization in a blockchain network. It is also crucial to consider the trade-off between scalability and decentralization when weighing the differences between L1 and L2 solutions.

Often, L1 solutions such as NEAR Protocol (NEAR) or Solana (SOL) have a higher coefficient because they offer a high degree of decentralization due to their reliance on a large number of validators. On the other hand, L2 solutions such as Opside or zkSync could offer improved scalability through the use of off-chain processing but would, in turn, be less decentralized due to their reliance on a smaller set of validators.
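For the curious, here is a minimal Python sketch of how a Nakamoto coefficient can be computed from a stake distribution, assuming the one-third compromise threshold commonly used for BFT-style proof-of-stake networks; the stake figures are made up purely for illustration.

```python
# Minimal sketch: Nakamoto coefficient from a validator stake list.
# Assumes a one-third compromise threshold (common for BFT-style
# proof-of-stake networks); stake figures are illustrative only.

def nakamoto_coefficient(stakes: list, threshold: float = 1 / 3) -> int:
    """Smallest number of validators whose combined stake exceeds the threshold."""
    total = sum(stakes)
    running = 0.0
    for count, stake in enumerate(sorted(stakes, reverse=True), start=1):
        running += stake
        if running > total * threshold:
            return count
    return len(stakes)

# Concentrated stake gives a low (worse) coefficient...
print(nakamoto_coefficient([30, 25, 15, 10, 10, 5, 5]))  # -> 2
# ...while evenly spread stake gives a higher (better) one.
print(nakamoto_coefficient([10] * 10))                   # -> 4
```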

The ongoing battle between L1 and L2 solutions has its fair share of pros and cons. While L1 blockchains offer superior security and decentralization, they suffer from scalability issues. In contrast, L2 solutions offer scalability and lower fees, but may come at the cost of compromising the security and decentralization of the underlying blockchain.

Evidently, L2 solutions are not a one-size-fits-all answer to the scalability challenge. They rely on the base layer's security and decentralization, and if the base is compromised, it could affect the very foundation of the layer-2 solutions in question.

Needless to say, as blockchain technology continues to mature, the outcome of this showdown will likely determine the path forward for scaling the technology to meet the demands of real-world applications. In the meantime, it is vital for both L1 and L2 solutions to work together to effectively address the scalability challenge.

Digi516 has been a crypto researcher and NFT enthusiast for almost a decade, with experience in educating and managing several crypto communities. Now, as head of community lead at XGo, Digi516 is on a mission to onboard the next 100 million users to Web3 and empower sovereign financial freedom.

Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim to provide you with all the important information we could obtain, readers should do their own research before taking any action related to the company and carry full responsibility for their decisions; nor can this article be considered investment advice.

Excerpt from:

Moving beyond the blockchain trilemma: L1 vs. L2 - Cointelegraph


Bluesky Social, another Twitter killer – Manila Bulletin

Before the free-speech absolutist who blocks and bans journalists who criticize him bought Twitter, Twitter announced a project to create a decentralized social network. Starting with the AT Protocol (ATP) specifications (see https://atproto.com/), the project, now a company, developed what is, IMHO, the reference implementation: Bluesky Social (see https://bsky.app/).

Bluesky recently released its free iOS app (download at https://apps.apple.com/us/app/bluesky-social/id6444370199). Whilst readily available, the service is still not open to the public, i.e., you will need an invitation to be able to create an account. Luckily, I was able to secure one through a friend I met on Micro.blog (hi https://maique.omg.lol!).

So what's it like to be under Bluesky? Creating an account starts with a screen that shows Bluesky as one of the servers (it defaults to this, as I don't know of any other instance running ATP at the moment), then asks for your invite code before you can proceed with your email and your bsky.social handle.

Once you are in, the interface is similar to the Twitter of old. Your timeline shows as default (Following) and then there's the popular timeline (What's Hot). There are four icons at the bottom: home, search, notifications and your profile. Pretty basic, yes. Mind you, it is not yet open to the public.

Bluesky has several priorities in the pipeline (see https://blueskyweb.xyz/blog to know what they're working on), so do not expect it to be at par with Twitter yet, not that it is planning to be exactly like Twitter! Heck, it is not yet at par even with Mastodon, Misskey, Calckey or Pleroma, considering it is barely a few months old. That being said, it is already functional as a social network!

Based on the AT Protocol, Bluesky is intentionally designed to be decentralized. Yes, decentralization is the future; enough with centralized services like Twitter, Facebook, Instagram, TikTok, and others. If anything, decentralization and federation have been proven to work by the ActivityPub-based federated universe, aka the fediverse. Don't believe naysayers claiming that the fediverse user base is shrinking (it is not); look at how much engagement has grown instead! Anyway, at the moment, Bluesky is in its own decentralized universe running on ATP, and hopefully someone is working on bridging ATP and ActivityPub in the future. So Bluesky is on its own for now (again, I have not seen any other implementation of ATP yet, though I am sure there are developers working on this already).

One thing that I noticed: content moderation and algorithmic timelines are high on their priority list. Right out of the gate, the app already has a MUTE capability, and with the recent app update, users can choose which types of content to filter out.

Another thing that I like is that there is no subscription needed to get your account verified as yours. Similar to how it is done on Mastodon, where you link your account with your website, Bluesky, albeit requiring more technical-jitsu, does this by allowing you to change your handle from the default bsky.social one to your own domain, e.g., me.null.dev. The assumption, as on Mastodon, is that you have full control of your own domain's DNS. Neat, huh?
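For a rough picture of what that linkage involves: the AT Protocol documents a DNS-based handle-resolution scheme in which the domain publishes a TXT record pointing at your account's DID. The Python sketch below, using the dnspython library, checks for such a record; the domain is just the example handle from the paragraph above, and the record details may of course evolve with the protocol.

```python
# Rough sketch: look up the DNS TXT record the AT Protocol uses to map
# a domain handle to a DID. Record name/format follow atproto's
# published handle-resolution docs and may evolve; the domain is just
# an example. Requires: pip install dnspython
import dns.resolver

def atproto_did_for(domain: str):
    """Return the DID advertised for a domain handle, or None."""
    try:
        answers = dns.resolver.resolve(f"_atproto.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        text = b"".join(record.strings).decode()
        if text.startswith("did="):
            return text[len("did="):]
    return None

print(atproto_did_for("me.null.dev"))  # e.g. "did:plc:..." if configured
```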

If you were on Twitter during the early days, then you'd find Bluesky to be familiar: a place where everyone is friendly. If you are sick of the current Twitter (I am!), then go secure an invite by joining the waitlist.

Personally, I alternate between Bluesky and Mastodon, and yes, I find both to be far better than Twitter!

Continued here:

Bluesky Social, another Twitter killer - Manila Bulletin


Decoding Digital Subscription Success – Egon Zehnder

The rise of digital subscription businesses has revolutionized the way we consume products and services. From entertainment to health and wellness, people want tailored experiences that can be delivered right to their phones or inboxes. But a looming global recession, unstable markets, rising inflation and the cost of living have caused many people to cut back on subscriptions, prompting companies to focus on retention over acquisition.

Our conversations with more than 20 leaders in consumer subscriptions organizations over the past few months underscored the need for this strategy change, but also shone a spotlight on the internal impact these shifts have on priorities, organizational structures and leadership. In this article, we explore key actions leaders need to take now to ensure their teams are able to capitalize on this strategy shift instead of being hampered by it.

While centralized structures are often the most cost-effective and have a clear leadership structure, most digital subscription companies need an element of decentralization to be competitive today. Innovation and entrepreneurship tend to thrive in decentralized environments, where leaders feel and act like empowered owners and have the freedom to make decisions quickly and take necessary risks.

However, there are a few caveats to decentralization. When there's a wholesale shift in strategy (e.g., the change from acquisition to retention), functional teams can suffer if they don't have a voice in large-scale organizational decisions. Additionally, decentralization often means duplicate work and positions. A company's willingness to accept this duplication of effort depends on whether its primary focus is on growth or profitability. For companies focusing on growth, it's more likely this duplication is part of the trade-off of rapid scaling. For those focusing on profit, the cost of duplicative work is likely less palatable.

Before making a shift to a different organizational model because of a strategy change, take stock of what is working and where you can improve. It may turn out that you don't need to make a structural change; instead, you could work on developing employees' muscles around communication and creating clarity across decentralized teams. Leaders can also focus on building cross-functional rituals to ensure communication is intentional and deepens relationships across teams.

If you're asking this question, you may have already reached the limit of what your current leaders and workforce can handle. Reorganizations in digital companies happen rather frequently, triggered by personnel changes, mergers and acquisitions, new business priorities and market expansion. However, even for the most agile employees, frequent reorganizations can take a toll. Employees at companies that reorganize every six months or every year may experience burnout and stress, as they are expected to deliver results on their current projects while also implementing new operating models and ways of working at the same time. This high level of stress and expectation can erode employee morale over time, leading to burnout and disengagement from the purpose and outputs of the organization.

One finding from a recent study of ours across generations and roles in the workplace was that employees value their physical and mental wellbeing above all else, including compensation. While you may not be able to limit your reorganizations, consider offering your employees relevant additional perks. This could include paid time off for additional professional development, mental health days for recharging, or simple ideas like eliminating internal meetings on Friday afternoons. It's also important that people feel heard when it comes to voicing their frustrations. Leaders need to actively listen and take stock of company morale. An environment that constantly feels unstable, and employees who feel underappreciated, will undermine productivity and engagement, potentially leading to an increase in talent turnover.

As subscription businesses shift from focusing on acquisition to retention, they need strong functional leaders across the critical areas of product, data, marketing, engineering and others, who are aligned on the company's goals, strategies, and decision-making processes. For example, your Chief Marketing Officer, Chief Product Officer and Chief Data Officer should operate in lockstep, with data at the core of driving both product development and product positioning. Bringing the strengths of each of these teams together creates a powerful cross-functional trio able to understand the business strategy from multiple angles and add context from each of their functions. For this deep alignment to be successful, these leaders must also possess high emotional intelligence and comfort with ambiguity, and approach every challenge with the mindset of the customer.

In addition to being strongly connected across functions, these functional leaders should have a voice at the very top of the organization, with access to the CEO and the board when applicable. This is extremely important when new strategies and products are being considered. They will have the business insights that are critical to making informed decisions.

This doesn't mean endless WhatsApp and Slack messages at all hours. What it does mean is intentional time for connecting on shared leadership priorities and establishing agreed-upon rules of engagement and a decision-making framework. Then these leaders must translate this shared passion for connection to their teams, which will result in higher employee morale and will benefit the business with greater alignment across functions. This collaboration may take some time to build, especially if your company is accustomed to frequent reorganizations. The teams you work with may shift, and you need to devote time to building trust in these new relationships. It may also be a time to consider team coaching to help build stronger alignment across functions.

Digital subscription businesses that change customer strategy without considering their organizational structure and people may find themselves in danger of being canceled, first by their employees and then by customers, if they lack the talent to fulfill customer needs. Leaders must remember that customer centricity shouldn't come at the cost of their employees; businesses that thrive know that to build customer loyalty, they need to build employee loyalty first.

Read more here:

Decoding Digital Subscription Success - Egon Zehnder


Scientists engineer the first light-powered yeast – Science

Yeast are carb lovers, sustaining themselves by fermenting sugars and starches from sources such as dough, grapes, and grains, with bread, wine, and beer as happy byproducts. Now, researchers have made one type of yeast a little less dependent on carbs by enabling it to use light as energy.

The work, reported last week on the preprint server bioRxiv, "is the first step in more complex modes of engineering artificial photosynthesis," says Magdalena Rose Osburn, a geobiologist at Northwestern University who was not involved in the research. It also recapitulates a key evolutionary transition: the harnessing of light. "It is extraordinary," says Felipe Santiago-Tirado, a fungal cell biologist at the University of Notre Dame. "To some extent, it's like turning an animal into a plant."

Well, not quite. To convert carbon dioxide into sugars that fuel life on Earth, plants rely on a protein complex that includes chlorophyll to shuttle both electrons and protons, which perform chemical reactions and transfer energy. Researchers have been working for years to recreate photosynthesis, both to explore how to use light more efficiently as an energy source for solar panels and other applications, and to breed plants (and other organisms) to be more productive.

But the chlorophyll complex requires many other molecules to do its job. So Anthony Burnetti, a geneticist at the Georgia Institute of Technology, and Georgia Tech evolutionary biologist William Ratcliff sought a simpler solution. They homed in on a protein known as rhodopsin, which doesn't require a large molecular entourage. It's a solution nature has settled on as well: Bacteria, some protists, marine algae, and even algal viruses use rhodopsin to convert light into usable energy, often to pump protons for cellular functions.

The researchers began by inserting a rhodopsin gene that belonged to a marine bacterium into brewer's yeast (Saccharomyces cerevisiae) in a petri dish. Burnetti hoped the rhodopsin would find its way into the yeast's vacuole, an enzyme-laden sac that degrades unneeded proteins. An energy molecule called adenosine triphosphate (ATP) fuels the process by pumping protons into the vacuole to make its interior acidic, which is optimal for degradation.

Burnetti wondered whether light energy could do that job instead. But the team's first effort misfired when the rhodopsin protein made by the gene went to a different compartment known not for protein degradation but for protein synthesis. So Burnetti looked instead for a rhodopsin already known to exist in vacuoles. He settled on using one from corn smut, a fungal pathogen. By attaching a green fluorescent tag to the protein, he and his colleagues verified that it had localized to the yeast's vacuole, as they hoped.

Graduate student Autumn Peterson, a member of Burnetti's team, went a step further to prove this engineered yeast was indeed using light. She grew the new strain in the same dish as the original, unaltered yeast and exposed them to green light, the wavelength rhodopsin is most sensitive to. The cells in the light-sensing strain had shorter lives but reproduced fast enough to outgrow the non-light-sensing yeast by 0.8%, the team found. "That's a massive advantage," says Santiago-Tirado. Over time, in the light, Peterson expects the light-using cells to eventually replace the unaltered ones, just as early light users might have replaced their competitors in nature eons ago.

Burnetti and his colleagues think light induces the rhodopsin to pump more protons into the vacuole, relieving the cell's need to expend ATP for this task and instead freeing up that energy to help the cell grow in other ways. Increasing the acidity inside the vacuole may decrease it outside the vacuole, causing enzymes there to work faster and wear out sooner, which may also help explain the higher death rate among these altered cells. Whichever way it's working, "it is clearly of benefit to the yeast cells," says Michael McMurray, a molecular biologist at the University of Colorado Anschutz Medical Campus.

But the experiment may not reveal much about how rhodopsin use evolved in nature. "I think the authors overemphasize the evolutionary significance of their work," says Robert Blankenship, an emeritus biochemist at Washington University in St. Louis. "This is an artificial construct and is not the product of natural evolution."

Others think the work can have industrial, medical, and basic research applications. Alaattin Kaya, a biologist who studies aging at Virginia Commonwealth University, says these yeast cells can help clarify why vacuole acidification over the life of a cell sometimes seems to cause mitochondria to malfunction and in turn accelerate aging. He would love to add rhodopsin to mitochondria themselves to observe its impact.

Burnetti would like to target mitochondria as well, but for a different reason. "Even though it seems to have never happened in nature, we definitely plan to eventually put rhodopsin into the mitochondrion." Because mitochondria can make ATP efficiently, adding rhodopsin could provide a lot of energy directly from the Sun, just as photosynthesis does. In that regard, yeast would then be a little more like plants.

Read the rest here:

Scientists engineer the first light-powered yeast - Science


Navigate the new world with digital engineering – TechRadar

The IT (Information Technology) industry as we know it has evolved drastically over the last two decades. This evolution typically follows a cyclical pattern, tracing an S curve. In the 1980s/90s, when the software industry flourished, it was all about reimagining and reshaping the way the world works. As a result, software became reliable and highly stable. Eventually, we began seeing the benefits of digitization, which led to the IT industry exploding. New services and new offerings were developed by integrating different technologies. Building the foundational technology was the first half of the S curve; maintaining it and coming up with newer services to make use of the foundations was the second half.

Today, technology enables businesses to incubate innovative ideas every day. Continuing the S curve, digital engineering is also reimagining the world we live in. Digital engineering is all about building new products, new avenues, new business models, and newer technologies.

As digitalization has an ever-increasing impact on our lives, we are slowly transitioning into a "phygital" age. This refers to the ability to move from digital to physical and back to digital in an omnichannel environment: for example, you use a digital experience to buy a physical good that gets delivered to you; value-added services bolt onto a physical object in a digital way; or a physical object carries a digital interface.

The new digital world will be filled with products, platforms, and experiences developed by digital engineers. In digital engineering programs, data is gathered and analyzed to create a digital twin. In the same way that software engineers develop programs, digital engineers create what is known as a BIM (Building Information Modelling), which includes information about a physical asset's design, construction, and future use. They aim to capture this data in an orderly and structured way from the outset of an engineering project, collaborating with other stakeholders to ensure quality. Digital engineering also includes drone imagery, augmented and virtual reality, Internet of Things sensors, advanced building materials, artificial intelligence, and machine learning. Combined with BIM, these technologies can be used to create a digital twin that accurately represents its physical counterpart in real time.

By using a digital twin, businesses can test and anticipate project outcomes, therefore understanding asset construction intricacies and reducing risks. The digital engineering process involves collaborative ecosystems that span departments and demographics to identify, generate, and validate ideas, observations, and analyses quickly. In addition, digital engineering allows engineers to design assets with maximum value at their core and to amplify asset efficiency.
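As a loose illustration of the digital-twin concept described above (a toy Python sketch, not any particular BIM product's API), the model below mirrors a physical asset's state from incoming sensor readings and flags values that breach its design limits:

```python
# Toy digital-twin sketch (not any particular BIM product's API):
# mirror a physical asset's state from sensor readings and flag
# values that fall outside the asset's design envelope.
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    asset_id: str
    design_limits: dict                      # e.g. {"temperature_c": (0, 80)}
    state: dict = field(default_factory=dict)

    def ingest(self, sensor: str, value: float) -> None:
        """Update the twin with the latest reading from one sensor."""
        self.state[sensor] = value

    def anomalies(self) -> list:
        """Return (sensor, value) pairs that breach the design limits."""
        out = []
        for sensor, (low, high) in self.design_limits.items():
            value = self.state.get(sensor)
            if value is not None and not (low <= value <= high):
                out.append((sensor, value))
        return out

twin = DigitalTwin("pump-17", {"temperature_c": (0, 80), "vibration_mm_s": (0, 4.5)})
twin.ingest("temperature_c", 92.3)  # simulated IoT reading
twin.ingest("vibration_mm_s", 3.1)
print(twin.anomalies())             # -> [('temperature_c', 92.3)]
```

In a real deployment the readings would stream in from IoT sensors and the twin would sit behind simulation and visualization tooling, but the mirroring idea is the same.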


Rohit Madhok is Senior Vice President and Global Head of Digital Engineering Services at Tech Mahindra.

Humanity is reaching Mars and digitalization is at its peak. The emerging field of digital engineering, as well as related new-age technologies, is redefining how products are developed and manufactured for consumers by combining digital, physical, and virtual realms.

With digital engineering, employees will be able to work in new and more efficient ways, freed from the constraints of traditional engineering methods. Using digital twins, artificial intelligence (AI), and augmented and virtual reality (AR/VR), employees can resolve complex questions quickly and explore what is possible in a virtual environment. These technologies allow individuals to experiment with new products or processes and break things without fear of real-world repercussions. The ability to conduct this kind of experimentation without having to wait for things like physical prototypes or production lines to be built is hugely advantageous.

As digital engineering evolves further, it is integrating electromechanical engineering to further enhance IoT (Internet of Things) connectivity, a vital part of our digital world. Using digital twins, IoT connectivity can reach a level where all devices and experiences become immersive. A car, for example, will become a digital medium for transportation, like the self-driving cars that are already on the road. With this technology, chips can be designed to provide IoT connectivity and digital touch points and interfaces within the car. Even if your car is in Germany and you are in the United States or India, you could still interact with it directly.

To fully leverage digital engineering, business leaders will have to overcome several challenges. Vulnerabilities, security breaches, and even human rights issues are among the most well-known risks. Due to the number of touch points with the digital world, the human rights issue is the most important one at hand, since an individual's identity, persona, and digital footprint could be vulnerable to bad actors.

The carbon footprint is also an issue, due to the enormous cloud infrastructure that runs continuously. Many hyperscale computing service providers, such as AWS (Amazon Web Services), Microsoft Azure and Google Cloud Platform, are aware of this issue and working toward solutions. As businesses now use the cloud, cloud hosting providers can take on the burden of hosting and deploying IT infrastructure at scale more efficiently. This means organizations will consume less energy and produce much less end-of-life IT waste, which will reduce environmental pollution. In addition, as younger generations become more conscious of their carbon footprint and other environmental issues, we will see some significant improvements and overcome this challenge as a wider society.

A lack of skilled talent is another issue for the development of digital engineering and, indeed, the wider technology sector. Businesses need the best creators, designers, and engineers. This means that they will need to boost their internal capabilities with creative hiring and resourcing strategies that not only bring in industry-leading talent, but also provide continuous upskilling as time and technology evolve.

Thanks to digital engineering, businesses could pivot quickly during the pandemic and tough economic times, working virtually when needed and meeting customer demands more flexibly. With so much disruption and accompanying uncertainty in the world today, the ability to solve issues swiftly and creatively will become increasingly important in the coming years. By integrating technologies such as AI with digital engineering, tedious or repetitive processes can be removed, and cost-effectiveness and efficiency can be increased.

The digital engineering revolution is unquestionably here to stay and will gather further steam in the coming years.


Continued here:

Navigate the new world with digital engineering - TechRadar
