Responsibilities
1. Assess big data quality at each stage of the language model training process to ensure that the large language model's training data is valid.
2. Conduct competitive product analysis to identify directions for product improvement and keep the large language model in a leading position in the industry.
3. Track the performance of large language model products after launch and perform case analysis of user feedback to improve the user experience.
4. Collaborate with the technical and annotation teams as needed to ensure requirements are implemented smoothly.
5. Develop product and project process mechanisms to coordinate the efficient work of the various project roles.
Qualifications
1. Strong analytical and communication skills; good at identifying valuable product improvement suggestions in evaluation data and able to drive their implementation.
2. Interested in the field of large language models and willing to actively explore large language model/AIGC fields.
3. Strong logical thinking, an innovative spirit, and strong project management capabilities.
4. Bonus points: a technical background, evaluation or data analysis experience in fields related to large language models, or more than 2 years of experience working on content or policy security products.