Company Description

Re:Sources is the backbone of Publicis Groupe, the world’s largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 4,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury and risk management to help Publicis Groupe agencies do what they do best: create and innovate for their clients.   

In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications and tools to enhance productivity, encourage collaboration and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.

Job Description

  • Play a critical role in the development and application of data science algorithms and advanced analytics techniques across a variety of use cases, including recommendation models, email personalization, and segmentation.
  • Build models over various datasets to analyze importance/centrality (PageRank) and similarity (k-nearest neighbors, TF-IDF, etc.); a brief sketch of this kind of similarity modeling follows this list
  • Design and analyze experiments (A/B, multivariate) across the in-application user experience as well as email communications
  • Perform analyses of user data and provide feature teams with an understanding of how users are interacting with their product
  • Be able to take on tasks that are at times ambiguous or loosely defined, and pin down the specific requirements by communicating with the appropriate team leads
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Be able to clearly and concisely communicate to senior leadership the complexities of the models that have been built, how they will be used, and how they will impact end users
  • Approach issues and new feature development with creative solutions
  • Be able to write code and properly manage versions and deployment across environments
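
A minimal sketch, assuming Python and scikit-learn, of the kind of TF-IDF / k-nearest-neighbor similarity modeling referenced above; the catalog items and parameters are purely illustrative:

    # Sketch: TF-IDF vectors plus k-nearest neighbors for item similarity.
    # The catalog below is a hypothetical stand-in for real item or content data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import NearestNeighbors

    catalog = [
        "wireless noise-cancelling headphones",
        "bluetooth over-ear headphones",
        "stainless steel water bottle",
        "insulated travel mug",
    ]

    # Convert item descriptions to TF-IDF vectors.
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(catalog)

    # Fit a cosine-distance KNN index over the TF-IDF vectors.
    knn = NearestNeighbors(n_neighbors=2, metric="cosine")
    knn.fit(tfidf)

    # Retrieve the items most similar to the first catalog entry.
    distances, indices = knn.kneighbors(tfidf[0])
    for dist, idx in zip(distances[0], indices[0]):
        print(f"{catalog[idx]!r} (cosine distance {dist:.2f})")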

Qualifications

Must-have skills:

  • Strong experience working with data science/ML libraries in Python (NumPy, SciPy, scikit-learn, TensorFlow, etc.)
  • Strong experience working in cloud development environments (especially Azure, ADF, PySpark, Scala, Databricks Delta, R, SQL)
  • Experience building data science models for use on front end, user facing applications, such as recommendation models
  • Experience with REST APIs, JSON, streaming datasets
  • Experience working with user behavioral data, such as web analytics (Google Analytics, Adobe Analytics)
  • Experience building ML models and pipelines using MLflow and Airflow (see the MLflow sketch after this list)
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases.
  • Experience with stream-processing systems: Storm, Spark Streaming, etc.
  • Understanding of graph data; Neo4j experience is a plus
  • Strong understanding of RDBMS data structures, Azure Tables, Blob storage, and other data sources
  • Experience with test-driven development
  • Experience with Power BI or other reporting/visualization tools
  • Understanding of Jenkins and CI/CD processes using Git, both for standard code repositories and for cloud configurations such as ADF and Databricks configs
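
A minimal sketch, assuming Python, scikit-learn, and a local MLflow tracking store, of the MLflow-based model tracking mentioned in the list above; the dataset, parameters, and metric are purely illustrative:

    # Sketch: train a model and log its parameters, metric, and artifact with MLflow.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    with mlflow.start_run():
        n_estimators = 100
        model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
        model.fit(X_train, y_train)
        accuracy = accuracy_score(y_test, model.predict(X_test))

        # Record the hyperparameter, evaluation metric, and serialized model for this run.
        mlflow.log_param("n_estimators", n_estimators)
        mlflow.log_metric("accuracy", accuracy)
        mlflow.sklearn.log_model(model, "model")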

Location

Heredia, Costa Rica

Job Type

Full Time
