
Softcat partners with AI specialists Sysgroup on machine learning – City A.M.

Wednesday 29 May 2024 7:49 am

Softcat has partnered with artificial intelligence specialist Sysgroup in a move bosses hope will expand its machine learning customer base.

The companies have entered a strategic partnership which will see Sysgroup provide machine learning services for Softcat clients, as the firm looks to tap the ever-growing demand for AI tools.

Machine learning is technology that allows computers to mimic the way humans learn.

Sysgroup, which is headquartered in Manchester following a move from Liverpool in February, was valued at £2.2bn in its last funding round and reported revenue of £22.7m for the year ended 31 March 2024.

Softcat's chief technologist Andrew Hermsen said: "We are pleased to announce our strategic partnership with SysGroup, which represents a significant step forward in addressing the evolving needs of our customers in the dynamic and rapidly growing machine learning market.

"This collaboration leverages the unique strengths of both companies, combining SysGroup's innovative AI and ML solutions with Softcat's deep expertise in the IT market.

"Together, we are poised to deliver unparalleled value, helping our clients harness the transformative potential of machine learning to drive business growth and innovation.

"As the opportunities in this space continue to expand, we are committed to providing cutting-edge solutions that empower our customers to stay ahead of the curve."

Sysgroup's executive chairman Heejae Chae added: "We are thrilled to have achieved Preferred Partner status with Softcat plc, marking a significant milestone in our journey to become a leading force in the AI and ML markets.

"This partnership not only validates our innovative approach to AI and ML solutions but also opens up new possibilities for us to deliver exceptional value to our clients.

"By combining our strengths with Softcat's industry-leading expertise, we can offer comprehensive and cutting-edge solutions that drive efficiency and innovation for our customers.

"We look forward to the tremendous opportunities this collaboration will bring and are excited about the positive impact it will have on our clients' success."


Dell Technologies’ Ed Hicks: Federated Learning Could Help Agencies Advance AI at the Edge – GovCon Wire

Ed Hicks, business development manager for federal and artificial intelligence at Dell Technologies (NYSE: DELL), said government agencies that intend to implement AI at the edge should consider adopting federated learning to quickly glean insights from data while ensuring the security of critical data.

In an article published on Carahsoft.com, Hicks noted that federated learning could enable agencies to leverage larger datasets at a decreased bandwidth.

"Federated models require significantly less bandwidth than other models because the information isn't being sent back to a data center for processing," he wrote. "If an agency has a rich dataset in the cloud and a small amount of compute at the edge, it can use federated learning to train the edge device without having to move all the data from the cloud."
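For readers who want a concrete picture of that pattern, the sketch below shows a minimal federated-averaging loop: each simulated edge device trains on its own local data and only the resulting weights, not the raw data, are averaged centrally. The model, datasets, and round counts are illustrative assumptions, not Dell's actual tooling.

```python
# Minimal federated-averaging sketch: each edge device trains locally on its own
# data and only the model weights (not the raw data) are averaged centrally.
# Model, data, and round counts are illustrative assumptions.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, epochs=1, lr=0.01):
    """Train a copy of the global model on one device's local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), targets).backward()
        opt.step()
    return model.state_dict()

def federated_average(state_dicts):
    """Average the weights returned by the edge devices."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Linear(4, 1)
# Two simulated edge devices, each holding its own private data.
devices = [(torch.rand(16, 4), torch.rand(16, 1)) for _ in range(2)]
for round_ in range(3):  # three communication rounds
    updates = [local_update(global_model, x, y) for x, y in devices]
    global_model.load_state_dict(federated_average(updates))
```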

Hicks cited some of the key considerations for agencies that plan to apply AI at the edge, including the need to identify the end goal of the AI or a machine learning model.

Other key questions include how much data an agency is trying to process and how quickly it needs the results, he added.

The Dell Technologies executive discussed how the company works to help agencies that intend to incorporate AI at the edge, such as creating a roadmap for AI implementation, providing federated learning and analytics tools to agencies and finding ways for agencies to make the most of their data.

"We also provided a validated, containerized solution that agencies can use to quickly and easily deploy a federated learning solution in a Kubernetes environment," Hicks added.


Airbnb using machine learning technology to prevent parties – KYW

PHILADELPHIA (KYW Newsradio) With the help of machine learning technology, Airbnb says it will be cracking down on parties this summer.

"It's really important that those spaces are respected and treated with care, and that, you know, people are not showing up and taking advantage of that," said Airbnb's Global Director of Corporate and Policy Communications Christopher Nulty.

"The best part about staying in an Airbnb is often that you're staying in a neighborhood, and the only way to continue staying in a neighborhood is to be a good neighbor."

Nulty says the company will be using the technology to prevent any disruptive parties, paying close attention to bookings on Memorial Day, Fourth of July and Labor Day. It looks at how long guests are staying, past rental ratings, distance from home, and the number of guests.

So far, it has resulted in a 50% reduction in unauthorized parties. In 2023, more than 67,000 people across the U.S., including 950 in Philadelphia, were deterred from booking entire home listings over those weekends.
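Airbnb has not published its model, but the kind of feature-based screening described above can be illustrated with a deliberately simplified sketch; every threshold and weight below is invented for illustration only.

```python
# Hedged sketch of feature-based party-risk screening: score a booking from
# length of stay, past ratings, distance from home, and guest count, then flag
# high-risk bookings. Thresholds and weights are invented; Airbnb's actual
# system is not public.
from dataclasses import dataclass

@dataclass
class Booking:
    nights: int
    avg_past_rating: float   # 0-5, from previous stays
    distance_km: float       # guest's home to the listing
    guests: int

def party_risk_score(b: Booking) -> float:
    score = 0.0
    if b.nights == 1:          score += 0.4   # one-night stays carry more risk
    if b.avg_past_rating < 3:  score += 0.3   # poor history with hosts
    if b.distance_km < 40:     score += 0.2   # booking very close to home
    if b.guests >= 8:          score += 0.3   # large groups
    return min(score, 1.0)

booking = Booking(nights=1, avg_past_rating=2.5, distance_km=10, guests=10)
if party_risk_score(booking) > 0.5:
    print("Flag booking for manual review")   # flagged guests can still call support
```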

Those who are flagged, but aren't actually planning on throwing a party, can call Airbnb's customer service line.


Bringing generative artificial intelligence to space – SpaceNews

TAMPA, Fla. Amazon Web Services is busy positioning its cloud infrastructure business to capitalize on the promise of generative artificial intelligence for transforming space and other industries.

More than 60% of the company's space and aerospace customers are already using some form of AI in their businesses, according to AWS director of aerospace and satellite Clint Crosier, up from single digits around three years ago.

Crosier predicts similar growth over the next few years in space for generative AI, which uses deep-learning models to answer questions or create content based on patterns detected in massive datasets, marking a major step up from traditional machine-learning algorithms.

Mathematical advances, an explosion in the amount of available data, and cheaper and more efficient chips for processing it are "a perfect storm" for the rise of generative AI, he told SpaceNews in an interview, helping drive greater adoption of cloud-based applications.

"In the last year, AWS has fundamentally reorganized itself internally so that we could put the right teams [and] organizational structure in place so that we can really double down on generative AI," he said.

He said AWS has created a "generative AI for space" cell of a handful of people to engage with cloud customers and help develop next-generation capabilities.

These efforts include a generative AI laboratory for customers to experiment with new ways of using these emerging capabilities.

Crosier sees three main areas for using generative AI in space: geospatial analytics, spacecraft design and constellation management.

Earth observation satellite operators such as BlackSky and Capella Space already use AI extensively to gain more insights into their geospatial data, but have not yet bridged into generative AI.

Its also early days in the manufacturing sector, but Crosier said engineers are experimenting with how a generative AI model fed with design parameters could produce new concepts by drawing from potentially overlooked data, such as from the automotive industry.

"Whether you're designing a satellite, rocket or spacecraft, you're letting the generative AI go out and do that exploratory work around the globe with decades of data," he said, "and then it will come back and bring you novel design concepts that nobody has envisioned before for your team to use as a baseline to start refining."

He said generative AI also has the potential to help operators manage increasingly crowded orbits by helping to simulate testing scenarios.

"If I have a constellation of 600 satellites, I want to model how that constellation will behave under various design parameters," he said.

"Well, I can get a model of two concepts, which leaves me woefully inadequate but it costs time and money to model them, or I can model an infinite number. Gen AI will tell me what are the top 25 cases I should model for my modeling simulation capability that will give me the best design optimization, and so we're seeing it used that way."

AWS' efforts to accelerate the adoption of these emerging computing capabilities also include scholarships and a commitment announced in November to provide free AI training for two million people worldwide before the end of 2025.

This article was updated May 28 to clarify that BlackSky and Capella Space have yet to integrate generative AI into their business, although they use AI extensively.


Predicting multi-label emojis, emotions, and sentiments in code-mixed texts using an emojifying sentiments framework … – Nature.com

An extensive discussion of experiments, results, and analysis on our introduced dataset, for both the proposed method and existing state-of-the-art baselines, is presented below.

The following baseline methods are compared to our proposed approach.

XLMR$^{[FT+LS+RF]}$ [86]: In this method, a pre-trained BERT (Bidirectional Encoder Representations from Transformers) model is fine-tuned (FT) to perform sentiment analysis. To reduce overfitting, the authors incorporated label smoothing (LS) and rule-based features (RF) such as negation handling and sentiment shift detection. This model is used for emoji, sentiment, and emotion analysis tasks.

Multilingual BERT (mBERT) [87]: The authors utilized a transformer-based language model called mBERT to learn contextual embeddings for words in multiple languages. mBERT was pre-trained on large amounts of monolingual and multilingual text data and fine-tuned on the SentiMix code-mixed dataset for sentiment detection and emotion recognition.

XLMR$^{MTL}$ [87]: The authors used XLM-R, a cross-lingual language model based on a transformer architecture that was pre-trained on a larger dataset including code-mixed text. XLM-R can encode and decode text in multiple languages and has achieved state-of-the-art results on various NLP tasks, including sentiment analysis and emotion recognition. They fine-tuned XLM-R on the SentiMix code-mixed dataset for sentiment detection and emotion recognition.

TL-XLMR$^{[LS]}$ [6]: To detect sentiment and recognize emotions in the SentiMix code-mixed dataset, the authors employed an end-to-end multitask framework based on a transformer architecture. They fine-tuned XLM-RoBERTa (XLMR), a pre-trained cross-lingual embedding model, with task-specific data to improve model efficiency through transfer learning.

TL-mBERT$^{[LS]}$ [6]: In this ablation experiment, the authors replaced the XLMR module with mBERT to investigate the significance of the sentence encoder in TL-XLMR$^{[LS]}$. The model was fine-tuned on the SentiMix code-mixed dataset to perform sentiment detection and emotion recognition.
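The baselines above share a common recipe: fine-tune a pre-trained multilingual encoder on the code-mixed data. A hedged sketch of that recipe, assuming the public xlm-roberta-base checkpoint from the Hugging Face hub and an invented three-way sentiment label set, looks roughly like this (the exact SentiMix preprocessing and training loop are not reproduced here):

```python
# Hedged sketch of the shared baseline recipe: load a pre-trained multilingual
# encoder (XLM-RoBERTa here) and fine-tune a classification head on code-mixed
# sentiment labels. Label set and training details are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3)          # positive / neutral / negative (assumed)

batch = tokenizer(["Yaar yeh movie bahut achhi thi!"],   # code-mixed example sentence
                  return_tensors="pt", padding=True, truncation=True)
labels = torch.tensor([0])                                # 0 = positive (assumed mapping)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
optimizer.zero_grad()
out = model(**batch, labels=labels)   # loss is computed internally by the model
out.loss.backward()                   # one fine-tuning step
optimizer.step()
```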

Our proposed model is implemented using PyTorch, a popular Python deep-learning toolkit. We employ the F1-score (F1) as our evaluation metric for both emotion and sentiment prediction; for emoji prediction we use the Jaccard Index (JI) and macro F1-score. We utilize the Adam optimizer [88] and perform a grid search over 200 epochs to tune the model. We use a Transformer encoder with two layers; our embedding size is 300, which we chose empirically (we checked 100, 150, 200 and 300). The dropout rate is set at 0.5 and the learning rate at 0.05. The auto-encoder's latent dimension was empirically set to 2048. The discriminator, $\mathcal{D}$, is composed of two fully connected layers with a ReLU layer; its learning rate is set to 1e-3, with a weight decay of 1e-4 and momentum of 0.3. The efficacy of our approach is assessed by comparing the F1 and accuracy scores against different baselines. In the CM-RFT, the kernel is dynamically computed from the input using a fully connected layer. The kernel sizes are [3, 5, 7, 31*3], and each module has 4 heads (half the number of heads in the transformer base model).
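A minimal sketch of this configuration in PyTorch is shown below; the hyperparameter values are taken from the text, while the module shapes, the discriminator's hidden width, and the choice of SGD for the discriminator (implied by the momentum setting) are assumptions rather than the authors' released code.

```python
# Minimal sketch of the training configuration described above (hyperparameter
# values from the text; module shapes and optimizer choices are assumptions).
import torch
import torch.nn as nn

EMB_DIM = 300          # embedding size chosen empirically (100/150/200/300 tried)
NUM_LAYERS = 2         # two Transformer encoder layers
DROPOUT = 0.5
LATENT_DIM = 2048      # auto-encoder latent dimension

encoder_layer = nn.TransformerEncoderLayer(d_model=EMB_DIM, nhead=4, dropout=DROPOUT)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=NUM_LAYERS)

# Discriminator D: two fully connected layers with a ReLU in between
# (hidden width of 512 is an assumption).
discriminator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512),
    nn.ReLU(),
    nn.Linear(512, 1),
)

# Main model optimizer (Adam, lr = 0.05 as reported). The discriminator uses its
# own optimizer with lr = 1e-3, weight decay = 1e-4, momentum = 0.3 (SGD assumed).
opt_model = torch.optim.Adam(encoder.parameters(), lr=0.05)
opt_disc = torch.optim.SGD(discriminator.parameters(), lr=1e-3,
                           weight_decay=1e-4, momentum=0.3)
```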

For the emoji detection task, we consider the Jaccard Index (JI) [89] and Hamming loss (HL) [90] metrics to evaluate the performance of our proposed system. Additionally, we also report the micro-averaged F1 score [91] and accuracy values for the same (as shown in Table 8). JI, HL, and micro-averaged F1 are popular choices for evaluating multi-label classification tasks. For the sentiment and emotion detection tasks (as shown in Tables 9 and 10), we report the macro-averaged F1 score [91] and accuracy values for our proposed model.

Micro-averaged F1 score: For multi-label classification tasks, the micro-averaged F1 score is a commonly used metric that computes the F1 score globally by counting the true positives (TP), false negatives (FN), and false positives (FP) across all labels. The formula for the micro-averaged F1 score is: $F1_{micro} = \frac{2\sum_{i=1}^{n} TP_i}{2\sum_{i=1}^{n} TP_i + \sum_{i=1}^{n} FP_i + \sum_{i=1}^{n} FN_i}$

Macro-averaged F1 score: The macro-averaged F1 score is another commonly used metric for multi-label classification tasks. It computes the F1 score for each label and then takes the average of these F1 scores. The formula for the macro-averaged F1 score is: $F1_{macro} = \frac{1}{n}\sum_{i=1}^{n}\frac{2\,TP_i}{2\,TP_i + FP_i + FN_i}$

Accuracy: Accuracy is a metric that measures the proportion of correctly classified labels to the total number of labels. The formula for accuracy is: $A = \frac{\sum_{i=1}^{n} TP_i}{\sum_{i=1}^{n} TP_i + \sum_{i=1}^{n} FP_i}$

Hamming Loss: The Hamming loss measures the proportion of misclassified labels to the total number of labels. The formula for the Hamming loss is: $HL = \frac{1}{n}\sum_{i=1}^{n}\frac{\mathrm{xor}(Y_i, \hat{Y}_i)}{m}$, where $n$ is the number of instances, $m$ is the number of labels, $Y_i$ is the true label vector for instance $i$, $\hat{Y}_i$ is the predicted label vector for instance $i$, and xor is the logical XOR operator.

Jaccard Index: The Jaccard Index measures the similarity between two sets by computing the ratio of the size of their intersection to the size of their union, and it is used to measure the similarity between the predicted and true label sets in multi-label classification. The formula for the Jaccard Index is: $JI = \frac{1}{n}\sum_{i=1}^{n}\frac{|Y_i \cap \hat{Y}_i|}{|Y_i \cup \hat{Y}_i|}$, where $n$ is the number of instances, $Y_i$ is the true label set for instance $i$, and $\hat{Y}_i$ is the predicted label set for instance $i$. The Jaccard similarity is computed as the size of the intersection of the predicted and true label sets divided by the size of their union. The resulting score ranges from 0 to 1, with 1 representing perfect similarity between the predicted and true label sets.
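For reference, the four metrics defined above can be computed on binary indicator matrices with scikit-learn's standard implementations; the sketch below is illustrative and is not the authors' evaluation script.

```python
# Hedged sketch of the multi-label metrics defined above, using scikit-learn on
# binary indicator matrices (toy data invented for illustration).
import numpy as np
from sklearn.metrics import f1_score, hamming_loss, jaccard_score

# Each row is one instance; each column is one emoji label (1 = present).
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1]])

print("micro F1 :", f1_score(y_true, y_pred, average="micro"))
print("macro F1 :", f1_score(y_true, y_pred, average="macro"))
print("Hamming  :", hamming_loss(y_true, y_pred))
print("Jaccard  :", jaccard_score(y_true, y_pred, average="samples"))
```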

Tables 8, 9, and 10 present the performance of the CM-T, CM-FT, and CM-RFT models for the emoji, sentiment, and emotion tasks in the UTL, DTL, and TTL setups. These setups investigate the effectiveness of multi-task learning in improving overall system performance compared to single-task learning.

The results reported in Table 8 are the performance metrics of three different models (CM-T, CM-FT, CM-RFT) trained in three different setups (uni-task learning, dual-task learning, and tri-task learning) for the task of emoji detection.

In the uni-task learning setup, where each task is solved individually, the performance of the CM-RFT model improves as more features are added. Specifically, the performance improves as we go from using only character embeddings to character embeddings + Elmo embeddings + TF-IDF. The F1 score increases from 0.59 to 0.64 and the accuracy score from 0.62 to 0.67, while the Hamming loss decreases from 0.15 to 0.13 and the Jaccard index increases from 0.52 to 0.56. These results suggest that using multiple features can improve the performance of the emoji detection task.

In the dual-task learning setup, where the emoji task is jointly learned with the sentiment or emotion task, the performance of the CM-RFT model further improves compared to the uni-task learning setup. The improvement is more evident when the model is trained on character embeddings + Elmo embeddings + TF-IDF features. The F1 score increases from 0.64 to 0.68 and the accuracy score from 0.67 to 0.71, while the Hamming loss decreases from 0.13 to 0.07 and the Jaccard index increases from 0.56 to 0.61. These results suggest that training the model on multiple tasks can lead to further improvements in the performance of the emoji detection task.

In the tri-task learning setup, where the sentiment, emotion, and emoji detection tasks are jointly learned, the performance of the CM-RFT model improves even further compared to the dual-task learning setup. The F1 score increases from 0.68 to 0.73 and the accuracy score from 0.71 to 0.75, while the Hamming loss decreases from 0.07 to 0.054 and the Jaccard index increases from 0.61 to 0.69. These results suggest that joint learning of multiple tasks leads to significant improvements in the performance of the emoji detection task.

Overall, the results suggest that the performance of the emoji detection task can be improved by using multiple features and by training the model on multiple tasks. Additionally, the results suggest that sentiment and emotion have a significant impact on the performance of the emoji detection task as joint learning of these tasks leads to significant improvements in performance.

The sentiment classification task results are presented in Table 9 for the joint learning of the emotion and emoji tasks. In the uni-task setup, where each task is performed independently, the CM-RFT model achieves the highest performance for the sentiment task with an F1 score of 72.65 and accuracy of 75.19. This suggests that including extra features, such as Elmo embeddings and TF-IDF features, can enhance sentiment detection performance across all models compared to those utilizing only character embedding features.

In the dual-task setup, when sentiment and emoji tasks are jointly learned, the F1 score and accuracy score of the sentiment detection task improve from 72.65 and 75.19, respectively, in the uni-task setup to 78.22 and 79.21, respectively, when using character embeddings, Elmo embeddings, and TF-IDF features. Similarly, when sentiment and emotion tasks are jointly learned, the F1 score and accuracy score of the sentiment detection task improve from 72.65 and 75.19, respectively, in the uni-task setup to 74.64 and 77.31, respectively, when using character embeddings, Elmo embeddings, and TF-IDF features.

In the tri-task setup, where sentiment, emotion, and emoji detection tasks are solved jointly, the CM-RFT model achieves the best performance for the sentiment task with an F1 score of 82.35 and accuracy of 83.14, followed by the CM-FT model with an F1 score of 75.42 and accuracy of 79.26. This again confirms that multitask learning helps to improve sentiment detection performance when it is learned jointly with other tasks.

The findings indicate that integrating the emotion and emoji detection tasks into the sentiment classification task can enhance the model's performance. The tri-task learning setup demonstrated the highest performance for the sentiment task, implying that incorporating these extra tasks can improve the model's comprehension of the sentiment expressed in text. The enhanced performance is likely due to the additional contextual information that emotions and emojis provide, particularly in cases where the sentiment is complicated or sarcastic. Therefore, incorporating emotion and emoji detection tasks could be a useful technique for enhancing the performance of sentiment classification models. Moreover, incorporating additional features, such as Elmo embeddings and TF-IDF features, can also improve sentiment detection performance.

According to the results presented in Table 10, we can observe that performance on the emotion task increases as we transition from single-task learning to dual-task and eventually to tri-task learning. In the single-task setup, the CM-RFT model outperforms the CM-T and CM-FT models across all three feature combinations, indicating that incorporating sentiment and emoji information can enhance the emotion detection task's performance. In the dual-task setup with emoji, the performance of all models is considerably lower than in the single-task setup. However, the performance improves as more features are incorporated, and the CM-RFT model achieves the best results with all three features. This suggests that utilizing various feature types can benefit joint learning of emoji and emotion detection, and the tri-task setup may provide further improvement. In the dual-task setup with sentiment, the performance is better than with emoji. The addition of Elmo embeddings and TF-IDF features leads to consistent performance improvement, with the CM-RFT model again achieving the best results. This implies that joint learning of sentiment and emotion detection can also benefit from the use of multiple feature types.

The presence of sentiment and emoji information appears to enhance the emotion task's performance, as suggested by the results. The best performance for the emotion task was obtained in the tri-task learning setup, which involved jointly learning the sentiment, emotion, and emoji detection tasks. The improvement in performance can be attributed to the fact that sentiment and emojis provide additional contextual information that can help in better disambiguation of emotions.

The results also suggest that multitask learning is more effective than single-task learning, especially when the tasks are related, as with emotion, sentiment, and emoji detection. The emotion task's performance improved consistently as we progressed from single-task to dual-task and finally to tri-task learning. This indicates that joint learning of related tasks can better utilize the available information and improve the overall performance of the system.

The results presented in Table 11 indicate that the CM-RFT model proposed in this study performs better than the state-of-the-art models for both the sentiment and emoji detection tasks. In the single-task scenario, mBERT achieved the highest accuracy of 63.77% and an F1 score of 61.54% for the emoji detection task. However, in the multi-task setting, the proposed CM-RFT model surpasses all other models, achieving an accuracy of 75.81% and an F1 score of 73.25%. This shows that the proposed model effectively uses multi-task learning to improve the performance of both tasks. Moreover, the model also shows promising results for the unsupervised emotion detection task, with an F1 score of 60.53% and an accuracy of 63.73%. This demonstrates that the zero-shot approach utilized in the proposed model is effective in detecting emotions from text even without labeled data.

When focusing on the emoji prediction task, the proposed CM-RFT model outperforms both single-task and multi-task models significantly. The model achieves an accuracy of 75.81%, which is approximately 12% higher than the accuracy of the best-performing single-task model (mBERT) and approximately 9% higher than the accuracy of the best-performing multi-task model (TL-XLMR$^{[LS]}$). Moreover, the model's F1 score is 73.25%, which is approximately 12% higher than the F1 score of the best-performing single-task model (mBERT) and approximately 8% higher than the F1 score of the best-performing multi-task model (TL-XLMR$^{[LS]}$).

We conducted additional experiments with our proposed model to compare it fairly with the single- and multi-task baselines discussed earlier. As none of the baseline models addressed unsupervised classification, they couldn't generate scores for the emotion task, unlike our proposed CM-RFT model, which solves sentiment and multi-label emoji detection in a supervised setting and emotion detection in an unsupervised setting using a zero-shot approach. Therefore, we trained two versions of the CM-RFT model: one in a single-task setting (CM-RFT$^{STL}_{[-Emo]}$) for all tasks and another in a multitask setting (CM-RFT$^{MTL}_{[-Emo]}$) without the emotion task. The results are presented in Table 11.

Comparing the performance of CM-RFT$^{STL}_{[-Emo]}$ with the single-task models XLMR, XLMR$^{[FT+LS+RF]}$, and mBERT, we observe that CM-RFT$^{STL}_{[-Emo]}$ outperforms all these models in terms of accuracy and F1 scores for the emoji and sentiment tasks. For example, the accuracy of CM-RFT$^{STL}_{[-Emo]}$ is 67.30% for the emoji task, while the highest accuracy achieved by the single-task models is 63.77%, by mBERT. Similarly, CM-RFT$^{STL}_{[-Emo]}$ achieves an F1 score of 74.64% for sentiment detection, while the highest F1 score achieved by the single-task models is 70.32%, by mBERT. These results indicate that the inclusion of the unsupervised emotion task has indeed helped the model perform better on the supervised tasks.

Comparing the performance of CM-RFT$^{MTL}_{[-Emo]}$ with the multi-task models MT-XLMR, TL-XLMR$^{[LS]}$, and TL-mBERT$^{[LS]}$, we observe that CM-RFT$^{MTL}_{[-Emo]}$ outperforms all these models in terms of accuracy and F1 scores for both the emoji and sentiment tasks. For example, the accuracy of CM-RFT$^{MTL}_{[-Emo]}$ is 71.68% for the emoji task, while the highest accuracy achieved by the multi-task models is 66.83%, by TL-XLMR$^{[LS]}$. Similarly, CM-RFT$^{MTL}_{[-Emo]}$ achieves an F1 score of 78.22% for sentiment detection, while the highest F1 score achieved by the multi-task models is 72.58%, by MT-XLMR. These results indicate that the inclusion of the unsupervised emotion task has indeed helped the model perform better in both single-task and multi-task settings.

We also evaluate the performance of the Llama model on the emotion recognition task by fine-tuning it for three epochs. Our model yielded an F1 score of 60.53 for emotion recognition, which positions it closely alongside the Llama model's F1 score of 61.11. These results underscore the effectiveness of our proposed approach in tackling emotion recognition tasks, indicating its potential for practical applications in natural language processing.

To sum up, the CM-RFT model we proposed outperforms the current state-of-the-art models in both sentiment and emoji detection tasks. Our results indicate that taking advantage of multi-task learning and utilizing a zero-shot approach for unsupervised emotion detection can lead to substantial improvements in task performance. For the emoji prediction task, our proposed model achieves a remarkable improvement over the best-performing single-task and multi-task models, demonstrating the efficacy of our approach.

To assess the effectiveness of our model, we conducted comparisons with several papers and their corresponding models.

Comparison Study 1: Emotion detection in code-mixed Roman Urdu-English text [51]. Models: We compared our model with BERT and XLM-RoBERTa. Dataset used: the Code-Mixed Roman Urdu-English Text dataset. The results, as shown in Table 12, indicate that our model outperforms both BERT and XLM-RoBERTa with an F1 score of 0.69, demonstrating its effectiveness in detecting emotions in code-mixed text.

Comparison Study 2: A self-attention hybrid emoji prediction model for code-mixed language [92]. Models: We compared our model with BARF. Dataset used: the Hinglish Emoji Prediction (HEP) dataset. The results, as presented in Table 13, indicate that our model achieves a higher F1 score of 0.64 compared to BARF, demonstrating its superior performance in predicting emojis in code-mixed language.

Comparison Study 3: Multitasking of sentiment detection and emotion recognition in code-mixed Hinglish data [6]. Models: We compared our model with TL-XLMR$^{MTL}_{LS}$. Dataset used: the SemEval-2020 Task 9 dataset [93]. Table 14 displays the results, showing that our model achieves higher F1 scores for both emotion detection (76.22) and sentiment analysis (70.31) compared to TL-XLMR$^{MTL}_{LS}$, indicating its effectiveness in multitasking for sentiment and emotion recognition in code-mixed Hinglish data.

Table 15 shows the results of four ablation experiments aimed at evaluating the contribution of different components in the proposed CM-RFT framework. The four components examined are the GLU module, the auto-encoder and ANP module, the self-attention mechanism, and the collective combination of the GLU, self-attention, ANP, and AE modules.

The results indicate that each component contributes to the overall performance of the CM-RFT framework. Removing any of these components leads to a significant decline in F1 scores for all three tasks, especially when all four modules are removed (row 4). This suggests that the proposed framework is well-designed, and each module plays a critical role in its success. Specifically, the GLU module seems to be a crucial part of the framework (row 1). The removal of this component leads to a significant decrease in performance across all three tasks, highlighting the importance of non-linear transformations in the text encoder. Similarly, removing the auto-encoder and ANP module leads to a drop in performance (row 2), indicating the importance of these unsupervised pre-training methods in learning useful feature representations. Moreover, the self-attention mechanism appears to be more effective than linear concatenation in fusing the output features of the GLU and Trans Encoder modules (row 3). This result confirms the superior performance of self-attention in capturing long-range dependencies and modeling interactions among input tokens. Finally, the collective combination of GLU, SA, ANP, and AE modules is a highly effective feature learning mechanism (row 4), but it also leads to higher computational costs. The result suggests that one can still achieve decent performance with a simpler linear concatenation mechanism, albeit at the cost of reduced model capacity and expressive power.

In summary, the ablation experiments demonstrate the importance of each module in the proposed CM-RFT framework for multi-label emoji prediction. The findings can guide the design of future models and shed light on the underlying mechanisms that contribute to their success.

Table 16 shows the results of four ablation experiments where each experiment is compared to the proposed CM-RFT containing all three loss functions ($\mathcal{L}_{ad}$, $\mathcal{L}_{re}$, and $\mathcal{L}_{al}$) for the emoji, emotion, and sentiment tasks.
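As a rough illustration of how such objectives are typically combined during training, the hedged sketch below adds weighted adversarial, reconstruction, and alignment terms to a supervised task loss; the loss choices, weights, and tensor roles are assumptions, since the paper's exact formulation is not reproduced here.

```python
# Hedged sketch: combining adversarial (L_ad), reconstruction (L_re), and
# alignment (L_al) losses with the supervised task loss. Weights and loss
# choices are illustrative assumptions, not the paper's exact scheme.
import torch
import torch.nn.functional as F

def total_loss(task_logits, task_targets,    # multi-label task head (float 0/1 targets)
               disc_logits, domain_targets,  # discriminator output (adversarial term)
               recon, inputs,                # auto-encoder reconstruction
               z_text, z_aux,                # two representations to be aligned
               lam_ad=0.1, lam_re=0.1, lam_al=0.1):
    l_task = F.binary_cross_entropy_with_logits(task_logits, task_targets)
    l_ad = F.cross_entropy(disc_logits, domain_targets)   # L_ad
    l_re = F.mse_loss(recon, inputs)                      # L_re
    l_al = F.mse_loss(z_text, z_aux)                      # L_al
    return l_task + lam_ad * l_ad + lam_re * l_re + lam_al * l_al

# Tiny demo with random tensors (shapes are illustrative).
loss = total_loss(torch.randn(8, 10), torch.randint(0, 2, (8, 10)).float(),
                  torch.randn(8, 2), torch.randint(0, 2, (8,)),
                  torch.randn(8, 300), torch.randn(8, 300),
                  torch.randn(8, 128), torch.randn(8, 128))
```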

The F1 scores for all three tasks consistently decrease in each ablation experiment when any of the loss functions are removed. The largest decrease in performance is observed when all three loss functions are removed, indicating that each loss function plays an important role in the model's performance. Specifically, removing both the $\mathcal{L}_{ad}$ and $\mathcal{L}_{re}$ loss functions has a greater negative impact on the model's performance than removing only one of these loss functions. This suggests that these loss functions contribute significantly to the model's ability to capture relevant features for both the adversarial training and the reconstruction of the input data.

In terms of the contributions of the individual loss functions, the adversarial loss ($\mathcal{L}_{ad}$) appears to have a slightly larger impact on performance than the alignment loss ($\mathcal{L}_{al}$) and the reconstruction loss ($\mathcal{L}_{re}$), especially for the emoji and emotion detection tasks. This indicates that the adversarial loss plays an important role in the model's ability to distinguish between different classes for these tasks. On the other hand, the alignment loss and reconstruction loss appear to be more important for sentiment detection.

Overall, these results demonstrate the importance of the proposed loss functions for effective training of the multitask emoji, emotion, and sentiment detection system. These findings can be used to guide the development of more effective training strategies for multitask learning models in the future. For example, incorporating additional loss functions or modifying the weighting of existing loss functions may improve the model's performance. Additionally, these results suggest that the importance of different loss functions may vary depending on the specific tasks being performed and the data being used, highlighting the importance of careful analysis and selection of loss functions in the design of multitask learning models.

In this section, we provide a qualitative analysis of our proposed multitask framework, which takes into account the relationship between emoji, sentiment, and emotion, as previously mentioned. To illustrate the impact of these tasks on one another, we have selected several examples from the SENTIMOJI dataset and present them in Table 17.

Observation 1: In the first sentence, the model correctly predicts a heart emoji, positive sentiment, and joy as the emotion. The model seems to have picked up on the positive sentiment and joy from the words "too good" and "don't know" respectively, and predicted the heart emoji to match the positive sentiment. Moreover, the word "bhai" (brother) may imply a friendly or affectionate tone, leading to the identification of the heart emoji. Finally, the presence of the word "joy" or similar words in the training data might have helped the model to identify the emotion accurately.

Observation 2: In the second sentence, the model correctly predicts the negative sentiment, but the predicted emoji is wrong. The model predicted a pouting face instead of an angry face, which could be because the pouting-face emoji can also indicate dissatisfaction or annoyance, which might be related to pride. Additionally, the emotion is misclassified as disgust instead of anger, which could be because of the strong negative sentiment and the use of words like "failure" and "can't do this".

Observation 3: In the third sentence, the model correctly predicts the Face With Open Mouth, Throwing Up emoji, indicating disgust, along with the negative sentiment. The sentence contains words like "missing", which suggests a negative sentiment, and the use of the Face With Open Mouth, Throwing Up emoji and the disgust emotion can be related to the revulsion expressed in the sentence.

Observation 4: In the first multi-label sentence, the model correctly predicts the negative sentiment and joy as the emotion, but only partially predicts the emojis. The use of "hardik subhkamnaye" and "Congratulations sir ji" in the sentence indicates a positive sentiment, and the use of "Dobara pm banee" suggests a sense of achievement, which could explain the use of the heart and sparkles emojis. The misclassification of the smiling-face emoji could be due to the lack of contextual information or insufficient training data.

Observation 5: In the second multi-label sentence, the model correctly predicts the negative sentiment but misclassifies the emotion as disgust instead of anger. For the emojis, the model predicted pouting face, crying face, and disappointed face, but the original annotations have pouting face, angry face, and Face With Open Mouth, Throwing Up. This could be because the model picked up on the negative sentiment and the use of words like "respect", "anything", and "woman", which might have led to the prediction of the pouting-face emoji, while the crying-face and disappointed-face emojis could be related to the negative sentiment.

Observation 6: In the third multi-label sentence, the model correctly identifies the sentiment as negative but wrongly predicts the emotion as anger instead of sadness. The model also only partially predicts the emojis, which may be due to the presence of multiple emotions in the sentence. To improve the prediction, the model could be trained on more data containing similar phrases and words to better distinguish between different negative emotions and emojis.

The analysis of the incorrect predictions revealed several common error patterns, which are summarized below:

Ambiguity in Emoji Interpretation: The model often struggles with emojis that have multiple interpretations depending on the context. For example, an emoji that can represent both laughter and tears of joy can lead to misclassifications.

Negation and Sarcasm: Negation and sarcasm in text can lead to misinterpretations by the model, especially in sentiment analysis. For instance, the phrase "not bad" may be misread as negative because of the word "bad", leading to misclassification.

Lack of Context: The model sometimes fails to capture the context of a sentence, leading to errors in sentiment and emotion classification. For example, short or contextually ambiguous sentences may be misclassified.

Data Imbalance: Imbalance in the distribution of classes can lead to biases in the model's predictions, especially for minority classes. This is particularly evident in emotion classification, where some classes have fewer examples than others.

Out-of-Vocabulary Words: The presence of out-of-vocabulary words in the text can lead to errors, especially when the model is unable to capture their semantics. This is more common in emoji and sentiment analysis tasks.

These error patterns highlight the challenges faced by the proposed CM-RFT model in understanding and interpreting text across different tasks. Addressing these challenges requires further research into more robust modeling techniques, better handling of context and ambiguity, and mitigation of biases in the data.

The joint learning of sentiment and emotion tasks with the emoji prediction task may have benefited the performance of the emoji task. This is because emotions and sentiments can provide additional context for the model to predict the appropriate emojis. For example, in the first correct prediction sample, the model was able to correctly predict the heart emoji, which may have been influenced by the positive sentiment and joyful emotion predicted for the sentence. Similarly, in the second incorrect prediction sample, the model correctly predicted the negative sentiment but misclassified the emotion and emoji, suggesting that it may not have fully captured the nuances of the text.

Relying on single-label emojis can be a risk in multi-label emoji prediction because emojis can have different meanings in different contexts, and a single emoji may not be able to capture all the nuances of the text. For example, the pouting-face emoji can be used to express anger, disappointment, or sadness, and without additional context it can be difficult to determine the exact emotion being conveyed. We observe in the incorrect prediction samples that the model has predicted some of the emojis correctly while missing others. This is better than fully incorrect predictions, because it shows that the model has some understanding of the context and can predict the relevant emojis to some extent. However, there is still room for improvement in the model's performance.

To improve the models predictions, we can consider the following steps:

Increase the training data: The model might benefit from additional training data to capture the various nuances of language and emotions.

Incorporate context: The model might benefit from incorporating the context of the sentence to better identify the sentiment, emoji, and emotion.

Use pre-trained language models: The model might benefit from using pre-trained language models that can capture the semantic meaning of words and phrases.

Regularize the model: The model might benefit from regularization techniques to prevent overfitting and improve generalization.

Analyze and correct errors: Analyzing the model's errors and correcting them might help improve the model's performance over time.

We perform a study using ChatGPT (https://chat.openai.com/) to demonstrate the effectiveness of our proposed framework. We notice that CM-RFT has an overwhelming performance advantage over ChatGPT. A few sample predictions from ChatGPT on the three tasks are shown below:

Prompt: Read these hinglish utterances and find the suitable emojis, emotion, and sentiment:

tere liye chand nhi la sakta baby actually tu bhaad mein ja

Tere ghamand k karan hi aaj congress k ye halat hai ... failure hai tu Bhai .. Tujhse na ho payega

Congress ki sarker mai cylinder he gayab ho gaya tha

Human Annotators:

Emoji Label: , ,

Emotion Label: Anger, Anger, Disgust.

Sentiment Label: Negative, Negative, Negative

Proposed Model:

Emoji Label: , ,

Emotion Label: Anger, Disgust, Disgust.

Sentiment Label: Negative, Negative, Negative

ChatGPT:

Emoji Label: , ,

Emotion Label: Dismissive, Anger, Confusion.

Sentiment Label: Negative, Negative, Neutral (depending on the context, it could be interpreted as negative)

In our analysis, it is evident that our model yields results akin to ChatGPT's. While ChatGPT is renowned for its high performance, our model demonstrates proficiency, particularly in handling code-mixed sentences.

While our proposed CM-RFT model demonstrates strong performance across multiple tasks, there are several limitations and potential biases that need to be addressed:

Data Bias: The performance of the model heavily relies on the quality and representativeness of the training data. Biases present in the training data, such as underrepresentation of certain demographics or topics, can lead to biased predictions by the model.

Language Bias: The model's performance may vary across different languages due to differences in linguistic structures, cultural nuances, and availability of training data. It may perform better on languages that are well-represented in the training data compared to those that are not.

Context Sensitivity: The model's performance is influenced by the context in which the text is presented. It may struggle with contextually ambiguous or sarcastic text, leading to misinterpretations.

Generalization: The model's ability to generalize to unseen data or domains is limited by the diversity and representativeness of the training data. It may perform well on data similar to the training data but struggle with out-of-domain or adversarial examples.

Interpretability: The complex architecture of the proposed CM-RFT model may hinder its interpretability, making it challenging to understand how and why certain predictions are made. This lack of interpretability can limit the model's usefulness in real-world applications where transparency and accountability are important.

Addressing these limitations and biases requires careful consideration of model design, training data, evaluation metrics, and ethical considerations. Future research should focus on developing more robust and fair AI models that are capable of handling diverse languages, cultures, and contexts while ensuring transparency, interpretability, and accountability. Additionally, efforts should be made to collect more diverse and representative training data and to develop evaluation metrics that account for biases and fairness concerns. By addressing these challenges, we can build AI models that are more reliable, equitable, and trustworthy for real-world applications.


Nebraska firm recommended to manage computer science ed stipends – The Union Leader



Assistant professor in computer science and software engineering elevated to Senior Member status by IEEE – Auburn Engineering

Sathyanarayanan "Sathya" Aakur, an assistant professor in the Department of Computer Science and Software Engineering (CSSE), has been elevated to Institute of Electrical and Electronics Engineers (IEEE) Senior Member.

"This is an exclusive level achieved by only about 10 percent of IEEE's more than 450,000 members," said CSSE Chair Hari Narayanan. "Earning this level is the highest professional grade of IEEE and requires extensive experience and reflects professional accomplishment and maturity."

Senior members are chosen based on professional service, leadership and significant contributions to the field and research. Aakur, who has served as an area chair and associate editor with several IEEE conferences and journals for the past five years, focuses his research interests on computer vision, natural language processing and visual understanding.

A 2022 National Science Foundation CAREER Award winner, Aakur's recent works were accepted at IEEE's International Symposium on Biomedical Imaging and in the Journal of Biomedical and Health Informatics.

"I'm extremely honored to be elevated to senior member," said Aakur, who joined the Auburn Engineering faculty in Fall 2023. Researchers need to be IEEE senior members for five years to apply for IEEE fellowship positions, the highest honor IEEE awards.

"Earning this distinction opens further opportunities within the IEEE for me. I have been involved in organizing conferences, and recently co-chaired the demo track at the Conference on Computer Vision and Pattern Recognition 2024. Being a senior member will allow me to explore leadership opportunities within IEEE and help Auburn continue to develop a culture of research and education in computer science."

Aakur's expertise in video understanding and computer vision for robotics comes in very handy, leading to publications and service in the computer vision and artificial intelligence research communities, giving him the experience and prestige deserving of the IEEE Senior Member distinction.

"We typically receive thousands of submissions to a conference; the around 30 papers that fall under my area are assigned to me, and it's my job to make sure that each of these papers receives a fair peer review from at least three people," he said. "Then I make recommendations to the program chairs, who administer the conference, about the decisions on the papers that are in my batch, with justification based on reviewer comments and intensive discussions."

Aakur encourages faculty peers to pursue opportunities within professional organizations. Why? Their work will be noticed.

"Every day, dozens of new papers come out," he said. "No matter how talented we are as researchers, our work gets lost in that. But these professional memberships allow us to showcase our work and bring more visibility to the department and the university."


Detecting weeds using robotsand a cloud | Rowan Today | Rowan University – Rowan Today

Shen-Shyang Ho, Ph.D.

Computer scientist

Data mining, artificial intelligence, machine learning

Shen-Shyang Ho, Ph.D., likes to solve new problems. His work in computer science over the last 20 years has ranged from studying image data for anomalies in the manufacturing sector to analyzing satellite data tracking cyclones and hurricanes at NASA's Jet Propulsion Laboratory.

"To me, the easiest problem to solve is an open one because there's no definitive answer," Ho says. "No one has come up with an optimal solution yet."

Supported by the National Science Foundation and in collaboration with researchers at Stony Brook, Temple and Kettering universities, Ho's lab conducts research to enable remote vehicles to make decisions efficiently. The method relies on a machine learning concept called cooperative inference.

In this solution, the deep learning model is split into two parts: one on a remote vehicle, such as a cell phone, drone or robot, and the rest on a server or cloud. The vehicle, powered by a limited energy source such as a battery, makes decisions with help from a cloud powered by a steady source of energy, using minimal communication between the remote vehicle and the server.

With Rowan mechanical engineering alumnus Paolo Rommel Sanchez, Ph.D., now a professor at the University of the Philippines Los Baños, Ho is testing a scenario involving precision agriculture, a farming method that uses technology to improve production results through targeted interventions.

The test uses a field robot to recognize weeds in a field of growing produce. Once weeds are identified via cooperative inference, the robot sprays herbicide on the weeds, not the crops.
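A minimal sketch of the split-inference idea, under stated assumptions, is shown below: the early layers of a small classifier run on the robot, a compact feature tensor is transmitted, and the remaining layers finish the weed-versus-crop decision in the cloud. The architecture, split point, and labels are illustrative, not the project's actual system.

```python
# Hedged sketch of split ("cooperative") inference: the first layers run on the
# battery-powered robot, a compressed intermediate feature is sent over the
# network, and the cloud finishes the prediction. Shapes and labels are
# illustrative assumptions.
import torch
import torch.nn as nn

backbone = nn.Sequential(                 # small convolutional classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(32 * 8 * 8, 256), nn.ReLU(),
    nn.Linear(256, 2),                    # weed vs. crop (illustrative labels)
)

edge_part = backbone[:6]    # runs on the robot
cloud_part = backbone[6:]   # runs on the server/cloud

image = torch.rand(1, 3, 64, 64)     # one camera frame
features = edge_part(image)          # small tensor transmitted to the cloud
logits = cloud_part(features)        # cloud completes the decision
print(logits.argmax(dim=1))          # 0 = crop, 1 = weed (assumed mapping)
```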

More research is needed to optimize both the machine learning model (weed detection) and the amount of energy and time it takes for the robot to transmit and receive information between the field and the cloud, where the more computationally expensive processes are executed.

"The robot is far away; it is a battery-driven robot," explains Ho. "We have to make sure that it can last long enough in the field. I think precision agriculture is a very realistic application for this AI technology because the robot is moving slowly, making decisions."

Additionally, Ho adds, robots can handle chemicals. "There is a benefit to the health of the farmer," he says. "Let the robots do the dangerous stuff."



From computer science to the alpine botanical garden – greenpeace.org

Growing up in China's southwestern Yunnan province, Haixian studied computer science in the early 2000s but ended up working at the Shangri-La Alpine Botanical Garden after graduation. Speaking to researchers from Greenpeace East Asia's Beijing office, she reflects on 20 years of monitoring and restoring high-altitude biodiversity in the Hengduan Mountain range.

The hour-plus interview has been edited for brevity.

Greenpeace: Why did you choose to work there? Why not a more typical job like in a factory?

Haixian: There are pretty much no factories here, and only a few service sector jobs, like hotels or restaurants. So there was an opportunity at the botanical garden, is all. There was no particular reason. Actually, at the time I wasn't too familiar with the botanical garden. It was just another job.

Greenpeace: I know you're Nakhi [a minority nationality community of Indigenous People in the Hengduan Mountains]. Do Nakhi people have customs or traditions in forestry or botany?

Haixian: No special traditions. The older generation's way of life was farming. My father was not Nakhi; he is from Xinjiang. My mother is a real Nakhi. And I am Nakhi, but I don't know much about the specifics of Nakhi culture or customs. I am somewhat sinicized, I can't read Naxi-Dongba characters, and I haven't bothered to learn them.

Greenpeace: How did you develop your skills in recognizing plant types?

Haixian: As of 2024, I've been doing this for 20 years. Actually, I really had zero foundation. First I learned to identify the specimens we take. But really you can't identify plant type by using specimens, because the characteristics in shape and color change when the plant dries out. Due to the lack of equipment, we can't do identification by molecular testing. So we can only use the plant's morphology. Studying this needs to incorporate more field work. In field surveys, you see reds and greens that will change after drying. Even flowers will change color. So field surveys and specimen identification need to be synchronized.

Greenpeace: You seem to enjoy it.

Haixian: Yes, I really do. Otherwise I wouldn't have kept up with it for 20 years.

Greenpeace: When do you feel most happy doing this?

Haixian: I'd rather be in the field, going out into the wilderness. Before I had children, I loved just going off into the countryside. When I was single, I didn't need to care about much and I was in good shape. Now I've got to watch the kids. So that's that.

Greenpeace: Why do you like going into the wilderness? Its quite tiring for a lot of people, who maybe feel indoor work is more suitable.

Haixian: If you're physically exhausted, you can get over it with some rest. If you're in an office writing materials all day long and can't get it done in a few hours, then what you need isn't rest. Field work is tough, and for physical labor you just need to rest to get better. Plus, you can travel around for free and see the scenery. Still, when it's tough, it is tough. Sometimes it rains or snows, and I still get lightheaded when the mountains are above 5,000 meters.

Greenpeace: When you do surveys, do you go by yourself? Or do some colleagues go with you?

Haixian: We have a team, usually no fewer than four people. Two men and two women, which makes booking accommodation more convenient. Over the last few years, the longest trip was a scientific expedition to the Qinghai-Tibet Plateau, going to Tibet and Sichuan for about 20 days. But the conditions were good. We didn't have to camp out in the field. When we work in northwest Yunnan, we have to camp. We've been doing that program for about 20 years, since 2005, when I was new. We go up to 4,000 meters above sea level and need to stay in the mountains. There are three to four peaks at each site, and we need to monitor the peaks and the plant life there. We stay for five to six days.

Greenpeace: What is the significance of your restoration work?

Haixian: My work is quite crucial for ecological restoration. You first have to look at all the types of vegetation around the area and see what plants are suitable for it. People assume you can just bring in seedlings from outside, move them in, plant them, and watch them grow. But you have to restore it to a state close to the original vegetation, though of course it won't be exactly the same. After surveying what is most suitable, we go back, pick seeds of fast-growing plants and sow them to first fix the soil. Pioneer plants grow fast and cover large areas.

Greenpeace: Can you give an example of a plant used for ecological restoration?

Haixian: Plants are sensitive to altitude and region, so restoration work differs across altitudes and areas. There are two or three species that are quite good, but the main one is evergreen laburnum (Piptanthus nepalensis, also known as Nepal laburnum). This species is suited to altitudes between 2,500 and 4,000 meters. It's evergreen and it's leguminous, so its roots have a nitrogen-fixing effect on the soil. It's a shrub, so it's fast-growing. So it's a good candidate for seeding.

Greenpeace: Can you tell me about a time you encountered a more difficult restoration project?

Haixian: Ecological restoration at high altitudes is difficult, mainly due to environmental and climatic factors. The soil is not very good, and most areas are sand and gravel. Passion can overcome all these obstacles. Even if they can't pay our salaries, we persist.

Greenpeace: What's the reason? I've heard from Director Fang that in the early years, the botanical garden owed a lot of money and couldn't pay salaries.

Haixian: In 2008 or 2009, we didn't get wages for about eight months. I can't say I didn't think about quitting. But love is the main reason that I stuck with it. It is much better now. Fifteen years ago, there was so much construction, and I was doing more biodiversity surveys and environmental impact assessments. After 15 years, the construction is done, and now we definitely need more restoration work. In the early stage we did survey work commissioned by companies, and the funding wasn't much. Now that we're doing the restoration there's more funding. The government now also attaches great importance to biodiversity. So in the past few years we have had quite a lot of biodiversity survey projects.

Greenpeace: Can you share a project youre particularly proud of?

Haixian: The book Bare Land Vegetation Restoration Research in Northwest Yunnan. Not that I particularly felt fulfilled by publishing the book, but seeing it through from start to finish was fulfilling. I was involved in the initial botanical survey, the assessment of the restoration areas, and various investigations all the way until this book came out. The book started in '08 or '09 with a survey of the pioneer plants in northwest Yunnan.

Greenpeace: After working in the botanical gardens for many years, how has working on ecological restoration improved or tempered or changed you personally?

Haixian: Quite a big change. When I started doing it, I felt it was an ordinary thing. But it doesn't feel ordinary, doing it. In the past I imagined, like most people, that I'd restore the slope, plant what looks good, you know, whatever to make the slope green. Looking back, I used to be so naive, even a little ridiculous. You have to do things in a professional way.

Greenpeace: And what are the differences between men and women when it comes to restoration work? What particular advantages do you think women have?

Haixian: Attentiveness in observation. There's a lot of physical labor in ecological restoration, and then a lot of things that appear insignificant but are actually essential. When doing surveys, you have to see what species are growing and looking good. When monitoring, you have to look at both the data and the growth of the plants themselves.


Read the rest here:

From computer science to the alpine botanical garden - greenpeace.org

Read More..

Stuart Gavidia '24 majored in computer science while interning at Amazon, Cannon, and Pierce County – Pacific Lutheran University

"Dr. Caley was so proactive, and he gave me, and other students, opportunities. He gets you connected with the right people," Gavidia says. "Dr. Lytle, though he's not in computer science, was instrumental in helping me navigate college experiences and also having interesting discussions about the intersections of life science and technology. He opened up that pathway to me."

Gavidia also was part of the College of Natural Sciences Mentoring Program. "Everyone should use that program. Those alumni are super motivated, and they answer any questions you have. It could be related to your major, or not, and you can just have good conversations with them."

Amazon has already offered him a software engineering position. Eventually, he wants to start his own software company after gaining more experience in the field.

Read this article:

Stuart Gavidia '24 majored in computer science while interning at Amazon, Cannon, and Pierce County - Pacific Lutheran University

Read More..