Google Cloud Machine Learning Data Engineer

About the job

The Google Cloud Professional Machine Learning Engineer will work on cutting-edge products that lie at the intersection of machine learning, distributed computing, and DevOps. Are you a high-performing data engineer or data scientist looking to take part in some of the most cutting-edge research and production projects? Do you enjoy reading and investigating advancements in applied machine learning architectures and solution white papers? Would you like to take part in, or drive, the creation of publishable advancements in machine learning across various disciplines? You could be a great match for the Machine Learning Data Engineer role.

RM Dayton Analytics (RMDA) takes insights that are buried in data and gives businesses a clear way to transform how they consume not only their own information, but also secure data clean room information that other entities have agreed to share. Our mission is to help leading companies scale their AI and analytics initiatives in ways to which they may not be accustomed.

The position:

As a Machine Learning Data Engineer, you will work on cutting-edge products that lie at the intersection of machine learning, distributed computing, and DevOps. You will leverage technologies such as Google Cloud Platform, Kubernetes, Docker, TensorFlow, Spark, and Kafka to build a containerized platform for deploying distributed frameworks, with the objective of handling the machine learning and big data infrastructure needs of an entire organization. You will also have opportunities to work in the consulting and research branches of the team.

This role will also provide technical subject matter expertise for RM Dayton Analytics sales and account teams during the scoping of new cloud data platform opportunities.

This is a remote position, open to any qualified applicant in the United States, with the ability to travel to client sites as needed. Transfer or sponsorship of US work authorization is not available for this position.

Practice – Cloud Data Platform


What will I be doing?

  • Act as an advisor to various lines of business to help create or improve projects.
  • Develop both deployment architecture and scripts for automated system deployment in GCP.
  • Implement new machine learning approaches, sometimes from first principles, for integration into production systems.
  • Learn and work with subject matter experts to create large scale deployments using newly researched methodologies.
  • Construct data staging layers and fast real-time systems to feed machine learning algorithms.


Key qualifications include:

  • Excellent communication skills, evidenced by multiple white papers (internal and proprietary, or externally published).
  • Demonstrated ability to build full-stack systems architected for speed and distributed computing.
  • Demonstrated ability to quickly learn new tools and paradigms to deploy cutting-edge solutions.
  • Experience mentoring junior engineers.
  • Adept at simultaneously working on multiple projects, meeting deadlines, and managing expectations.


What are we looking for?

To fulfill this role successfully, you should demonstrate the following minimum qualifications:

  • Four-year college degree in Computer Science, Information Technology, or equivalent demonstrated experience. Master's degree preferred.
  • At least 2 years of experience designing and building full stack solutions utilizing distributed computing.
  • At least 2 years of experience working with Python, Scala, or Java.
  • At least 2 years of experience with distributed file systems or multi-node database paradigms.
  • Google Cloud Professional Machine Learning Engineer, Google Cloud Associate Cloud Engineer, AWS Cloud Practitioner, or AWS Certified Machine Learning – Specialty certification.

It would be useful in this position for you to demonstrate the following capabilities and distinctions:

  • Master's degree or PhD.
  • At least 2 years of experience deploying production applications to a cloud services provider, GCP preferred.
  • At least 2 years of experience with machine learning or deep learning frameworks, such as TensorFlow, PyTorch, or H2O.
  • At least 2 years of experience with distributed data movement frameworks, such as Spark, Kafka, or Dask.
  • At least 3 years of experience with a container orchestration platform, such as Kubernetes.
  • At least 5 years of experience with CI/CD and infrastructure-automation technologies, such as Ansible, AWS CloudFormation, or Jenkins.
  • At least 5 years of experience leading teams in code development.


About RM Dayton Analytics

A little bit about us…

Founded in 2014, RM Dayton Analytics is a professional services company. We transform clients' business, operating, and technology models for the digital era. As a Snowflake and Google Cloud partner, our mission is to help enterprises accelerate innovation by harnessing the power of the Cloud Data Platform. Our associates provide superior domain and technology expertise to drive business outcomes in a converging world. Our consultative approach helps our customers build more innovative and efficient businesses.

RM Dayton Analytics is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law.

Job Category: Cloud Data Platform
Job Location: United States

Apply for this position

Allowed Type(s): .pdf, .doc, .docx