Pioneers AI
Our AI blog
Are you curious about the latest advancements in NLP and AI? Look no further! At our cutting-edge tech company, we’re shaping the world with innovative technologies. But don’t let these complex terms intimidate you! We’re on a mission to democratize them and make them easy to understand for everyone. That’s the goal of our blog: Pioneers AI.
Ready to dive into the exciting world of NLP and data science with us? Keep reading!
Balancing performance of language models across languages and domains with efficient use of data
Large language models have seen huge success across a variety of tasks. Generally, task-specific heads are either built on top of a base model or trained on embeddings generated by a pre-trained model. At Zortify, we focus on multilingual language models, which can process several languages at once, in contrast to monolingual models, which can only process a single language once trained.
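To make the pattern concrete, here is a minimal sketch of the second setup: a small classification head trained on embeddings from a pre-trained multilingual encoder. It assumes the Hugging Face transformers library and the publicly available xlm-roberta-base checkpoint; both are illustrative choices, not a description of our production stack.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load a pre-trained multilingual encoder (xlm-roberta-base covers ~100 languages).
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")

class ClassificationHead(torch.nn.Module):
    """A task-specific head trained on top of the frozen pre-trained encoder."""
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.linear = torch.nn.Linear(hidden_size, num_labels)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        return self.linear(embeddings)

head = ClassificationHead(encoder.config.hidden_size, num_labels=2)

# The same encoder produces embeddings for inputs in different languages.
for text in ["This model handles many languages.", "Ce modèle gère plusieurs langues."]:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Use the first-token embedding as a sentence representation.
        embedding = encoder(**inputs).last_hidden_state[:, 0, :]
    print(text, head(embedding))
```

Only the head's parameters are trained here; the pre-trained encoder stays fixed, which is what makes this setup cheap compared to full fine-tuning.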
Existing multilingual models have demonstrated impressive cross-lingual abilities on many tasks, such as zero-shot and few-shot classification. Despite these abilities, however, performance remains imbalanced across languages, which can lead to unfairness when multilingual inputs are fed to the model and their outputs are compared. Essentially, the imbalance is caused by the varying availability of data across languages and the specificity of the data used during training. Through efficient use of data and an improved understanding of the behavior of multilingual models, we research and propose new methodologies that provide comparable performance for small and large languages alike.
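As a toy illustration of how this imbalance shows up in practice, the helpers below evaluate a single multilingual classifier separately on each language and report the spread between the best- and worst-served languages. The classifier, the evaluation sets, and the example scores are hypothetical placeholders.

```python
from typing import Callable, Dict, List, Tuple

# Each evaluation set maps a language code to (text, gold_label) pairs.
EvalSets = Dict[str, List[Tuple[str, int]]]

def per_language_accuracy(
    classify: Callable[[str], int], eval_sets: EvalSets
) -> Dict[str, float]:
    """Score one multilingual classifier on each language separately."""
    return {
        lang: sum(classify(text) == label for text, label in examples) / len(examples)
        for lang, examples in eval_sets.items()
    }

def performance_gap(scores: Dict[str, float]) -> float:
    """Spread between the best- and worst-served language; 0.0 means balanced."""
    return max(scores.values()) - min(scores.values())

# Hypothetical example: a model serving English far better than Luxembourgish.
scores = {"en": 0.91, "fr": 0.87, "lb": 0.74}
print(performance_gap(scores))  # 0.17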
Domain-specific language models
In addition to multilingual models, we also develop domain-specific language models. Improving performance for small languages and improving performance for specific domains are similar problems, since both suffer from data sparsity. With domains, however, we can explore specific knowledge structures and representations and combine them with the linguistic information captured by general language models. With this combination, we aim to offer better performance when domain-specific data is used, and to find flexible solutions that let us transfer to new domains easily.
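One widely used way to realize this combination, sketched below, is continued masked-language-model pre-training on an in-domain corpus before attaching the task head (often called domain-adaptive pre-training). The library calls are from Hugging Face transformers and datasets; the checkpoint and the two-sentence corpus are stand-ins, and this is one possible recipe rather than our exact methodology.

```python
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Stand-in for a real in-domain corpus (in practice, thousands of documents).
domain_texts = [
    "The candidate showed strong conscientiousness in the assessment.",
    "Openness to experience correlated with the interview ratings.",
]
dataset = Dataset.from_dict({"text": domain_texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-adapted", num_train_epochs=1),
    train_dataset=dataset,
    # Randomly mask 15% of tokens so the model learns domain vocabulary in context.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```

After this adaptation step, the resulting encoder can be reused exactly as in the earlier sketch, with a task head trained on top of its now domain-aware embeddings.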
With these research objectives in mind, we are open to collaborating with research institutes and industry partners who are interested in the same topics, or who see a need for this research and want to co-develop it. We believe this research will bring value to both academia and industry and benefit a wide audience.
