What is Cobre and what do we do?
Cobre is a corporate treasury and payments platform designed to elevate the way Latin American companies manage their finances. At Cobre, we build CFO-tech on top of owned payment rails in order to give real-time capabilities to domestic and cross-border payments, empowering every peso that our clients move.
We are committed to eliminating friction between companies, their banking partners, their people, and their money, so that money movement becomes a growth driver rather than a time-consuming task. In this pursuit, Cobre has become a trusted provider for corporates and tech giants in the region, while building a profitable business that has grown 10x in the past 12 months.
What we are looking for:
As a Principal Data Scientist at Cobre, you will tackle complex business challenges by providing deep data insights and deploying advanced decision models within our core products and operational teams. Collaborating with cross-functional teams—including Product, Engineering, Risk, and Compliance—you will work to improve financial services across LATAM. Your focus will be on driving enterprise efficiency, optimizing money movement decisions for our clients, and delivering innovative solutions that enhance customer experiences.
What would you be doing:
Advanced Machine Learning Development: Design, build, and deploy machine learning models and advanced analytics to drive core business outcomes, such as enhancing money movement solutions, AML risk assessment across multiple countries, fraud detection, and cash flow optimization.
Retrieval-Augmented Generation (RAG): Develop and implement state-of-the-art models using Retrieval-Augmented Generation techniques to improve our AI capabilities in areas like customer support automation and intelligent data retrieval.
Product Collaboration: Partner with product managers and other stakeholders to shape the product roadmap and define project goals, ensuring alignment with business objectives.
End-to-End Model Lifecycle: Develop AI/ML models from offline training to online serving and monitoring, ensuring models are robust, scalable, and maintainable.
Talent Mentorship: Mentor junior data scientists and provide technical guidance to elevate team capabilities, fostering a culture of continuous learning and innovation.
Insight Communication: Translate data-driven insights into actionable business recommendations. Communicate complex findings clearly and concisely to both technical and non-technical stakeholders, including senior leadership.
Infrastructure Enhancement: Develop and enhance the company's data science infrastructure, ensuring efficient handling of large datasets and leveraging cloud-based technologies for scalability and performance.
What do you need:
Educational Background: Master's or Ph.D. in Statistics, Computer Science, Mathematics, Engineering, or a related quantitative field.
Technical Proficiency: Strong coding skills in Python. Experience with data infrastructure tools like Snowflake and Jupyter Notebooks. Proficiency with standard ML libraries (e.g., scikit-learn, pandas, NumPy). Experience with deep learning frameworks like TensorFlow and PyTorch, specifically for training and fine-tuning large language models (LLMs).
Expertise in Retrieval-Augmented Generation (RAG): Experience in developing and integrating retrieval systems with generative models to enhance AI capabilities.
Domain Expertise: Hands-on experience in developing models for market risk, fraud detection, and customer analytics, with a solid understanding of relevant financial regulations.