Alibaba's large model goes open source again! Qwen-VL can read images and recognize objects, is built on Tongyi Qianwen 7B, and is available for commercial use
Source: Qubit
Following Tongyi Qianwen-7B (Qwen-7B), Alibaba Cloud has launched Qwen-VL, a large vision-language model, and open-sourced it directly upon release.
For example 🌰: we feed in a picture of Anya, and through question-and-answer dialogue, Qwen-VL-Chat can not only summarize the content of the picture but also locate Anya within it.
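For readers who want to reproduce this kind of dialogue, here is a minimal sketch of image Q&A with Qwen-VL-Chat through Hugging Face transformers. It assumes the Qwen/Qwen-VL-Chat checkpoint and the chat interface published in its model card; the image file and questions are placeholders, not the article's actual demo inputs.

```python
# Minimal sketch: image Q&A with Qwen-VL-Chat via Hugging Face transformers.
# Assumes the "Qwen/Qwen-VL-Chat" checkpoint and its remote-code chat API;
# the image path and the questions below are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True
).eval()

# Build a multimodal query: one image plus a text question.
query = tokenizer.from_list_format([
    {"image": "anya.jpg"},  # placeholder image file (assumption)
    {"text": "What is in this picture?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)  # a summary of the picture

# Follow-up turn in the same dialogue: ask the model to locate the character.
response, history = model.chat(
    tokenizer, "Frame the character in the picture", history=history
)
print(response)  # the reply embeds <box>(x1,y1),(x2,y2)</box> coordinates
```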
The first general-purpose model to support open-domain grounding in Chinese
Let's first look at the characteristics of the Qwen-VL series models as a whole:
In terms of scenarios, Qwen-VL can be used for knowledge Q&A, image Q&A, document Q&A, fine-grained visual grounding, and more.
For example, suppose a foreign visitor who cannot read Chinese goes to the hospital and, faced with a bewildering floor guide, has no idea how to get to the right department. They can simply hand the map and their question to Qwen-VL and let it act as a translator based on the image.
In terms of visual grounding ability, even in a very complicated picture crowded with characters, Qwen-VL can accurately find Hulk and Spider-Man as requested.
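Continuing the sketch above, grounding works the same way: ask for a character in natural language, and the reply embeds box coordinates. This assumes the bounding-box drawing helper documented in the checkpoint's model card; the image and query are again placeholders.

```python
# Sketch of visual grounding, reusing the tokenizer and model loaded above.
# Assumes the checkpoint's bounding-box helper as documented in its model card;
# the image file and query are illustrative placeholders.
query = tokenizer.from_list_format([
    {"image": "crowded_scene.jpg"},  # placeholder: a busy multi-character image
    {"text": "Find Hulk and Spider-Man in the picture"},
])
response, history = model.chat(tokenizer, query=query, history=None)

# The reply embeds <box> coordinates; this helper draws them on the image.
image = tokenizer.draw_bbox_on_latest_picture(response, history)
if image is not None:
    image.save("grounded_output.jpg")
```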
The researchers evaluated Qwen-VL on standard English benchmarks across four categories of multimodal tasks: zero-shot captioning, VQA, DocVQA, and grounding.
In addition, the researchers built TouchStone, a test set based on a GPT-4 scoring mechanism.
If you are interested in Qwen-VL, demos are available on ModelScope and Hugging Face that you can try directly; the links are at the end of the article~
Qwen-VL is open for secondary development by researchers and developers, and commercial use is also allowed; note, however, that commercial users must first fill out a questionnaire application.
Project link:
Paper address: