Creating and running a CI/CD pipeline for an app
Implementing Configuration Management using Ansible
Deploy a containerized application using Docker
Create an application with an API and deploy it to Kubernetes
Deploying AWS DevOps Infrastructure
Implementing Continuous Integration using Jenkins
Performing Orchestration using Kubernetes
Implementing Continuous Monitoring using Nagios
Performing Continuous monitoring using Prometheus
Managing source code using Git/Github
Building AWS DevOps Infrastructure
Building CI & CD Pipeline for E-Commerce Industry
Building CI & CD Pipeline with DevOps
Learning Objectives: In this module, you will understand what Big Data is, the limitations of the traditional solutions for Big Data problems, how Hadoop solves those Big Data problems, Hadoop Ecosystem, Hadoop Architecture, HDFS, Anatomy of File Read and Write & how MapReduce works.
Topics:
Introduction to Big Data & Big Data Challenges
Limitations & Solutions of Big Data Architecture
Hadoop & its Features
Hadoop Ecosystem
Hadoop 2.x Core Components
Hadoop Storage: HDFS (Hadoop Distributed File System)
Hadoop Processing: MapReduce Framework
Different Hadoop Distributions
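The storage side of this module can be made concrete with a small sketch of how HDFS splits a file into fixed-size blocks. This is plain Python for illustration only (Hadoop 2.x defaults to a 128 MB block size); the function name is invented, and real HDFS additionally tracks replicas and block locations on DataNodes.

```python
# Sketch: how HDFS splits a file into fixed-size blocks.
# Hadoop 2.x default block size is 128 MB. Illustrative only.
BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB

def split_into_blocks(file_size_bytes, block_size=BLOCK_SIZE):
    """Return (block_index, block_length) pairs for a file of the given size."""
    blocks = []
    offset = 0
    while offset < file_size_bytes:
        length = min(block_size, file_size_bytes - offset)  # last block may be short
        blocks.append((len(blocks), length))
        offset += length
    return blocks

# A 300 MB file occupies two full 128 MB blocks plus one 44 MB block.
blocks = split_into_blocks(300 * 1024 * 1024)
print(len(blocks))                      # 3
print(blocks[-1][1] // (1024 * 1024))   # 44
```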
Learning Objectives: In this module, you will learn Hadoop Cluster Architecture, important configuration files of Hadoop Cluster, Data Loading Techniques using Sqoop & Flume, and how to setup Single Node and Multi-Node Hadoop Cluster.
Topics:
Hadoop 2.x Cluster Architecture
Federation and High Availability Architecture
Typical Production Hadoop Cluster
Hadoop Cluster Modes
Common Hadoop Shell Commands
Hadoop 2.x Configuration Files
Single Node Cluster & Multi-Node Cluster set up
Basic Hadoop Administration
Hadoop MapReduce Framework
Learning Objectives: In this module, you will understand Hadoop MapReduce framework comprehensively, the working of MapReduce on data stored in HDFS. You will also learn the advanced MapReduce concepts like Input Splits, Combiner & Partitioner.
Topics:
Traditional way vs MapReduce way
Why MapReduce
YARN Components
YARN Architecture
YARN MapReduce Application Execution Flow
YARN Workflow
Anatomy of MapReduce Program
Input Splits, Relation between Input Splits and HDFS Blocks
MapReduce: Combiner & Partitioner
Demo of Health Care Dataset
Demo of Weather Dataset
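The map, combine, partition, and reduce stages above can be simulated end-to-end in plain Python with a word count. This is an illustrative sketch only, not Hadoop code: all names are invented, each input line stands in for one input split, and the default hash partitioner is approximated with Python's `hash()`.

```python
# Simulated MapReduce word count: map emits (word, 1), a combiner
# pre-aggregates per split, a partitioner routes keys to reducers,
# and reducers sum the counts. Illustrative only.
from collections import defaultdict

def map_phase(line):
    return [(word, 1) for word in line.split()]

def combine(pairs):
    # Local pre-aggregation on the mapper side; cuts shuffle traffic.
    acc = defaultdict(int)
    for word, n in pairs:
        acc[word] += n
    return list(acc.items())

def partition(word, num_reducers):
    # Hadoop's default is roughly hash(key) mod num_reducers.
    return hash(word) % num_reducers

def run_job(lines, num_reducers=2):
    shuffled = [defaultdict(int) for _ in range(num_reducers)]
    for line in lines:                        # each line acts as one input split
        for word, n in combine(map_phase(line)):
            shuffled[partition(word, num_reducers)][word] += n
    result = {}
    for reducer in shuffled:                  # reduce: sum counts per word
        result.update(reducer)
    return result

counts = run_job(["big data big hadoop", "hadoop big"])
print(counts["big"])  # 3
```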
Advanced Hadoop MapReduce
Learning Objectives: In this module, you will learn advanced MapReduce concepts such as Counters, Distributed Cache, MRUnit, Reduce Join, Custom Input Format, Sequence Input Format, and XML parsing.
Topics:
Counters
Distributed Cache
MRUnit
Reduce Join
Custom Input Format
Sequence Input Format
XML file Parsing using MapReduce
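The Reduce Join topic above can be sketched in plain Python: mappers tag each record with its source table, the shuffle groups records by join key, and the reducer pairs rows across the two sources. Table contents and names are invented for illustration; a real job would do this with Hadoop's Java API.

```python
# Simulated reduce-side join of a users table with an orders table.
# Illustrative only; record layouts are made up.
from collections import defaultdict

users  = [(1, "alice"), (2, "bob")]
orders = [(1, "book"), (1, "pen"), (2, "lamp")]

def map_tagged(records, tag):
    # Map phase: tag each record with its source so the reducer
    # can tell the two tables apart after the shuffle.
    return [(key, (tag, value)) for key, value in records]

def reduce_join(tagged_pairs):
    groups = defaultdict(list)
    for key, tagged in tagged_pairs:      # shuffle: group by join key
        groups[key].append(tagged)
    joined = []
    for key, values in groups.items():    # reduce: cross the two sources
        left  = [v for tag, v in values if tag == "user"]
        right = [v for tag, v in values if tag == "order"]
        joined.extend((key, u, o) for u in left for o in right)
    return sorted(joined)

result = reduce_join(map_tagged(users, "user") + map_tagged(orders, "order"))
print(result)  # [(1, 'alice', 'book'), (1, 'alice', 'pen'), (2, 'bob', 'lamp')]
```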
Apache Pig
Learning Objectives: In this module, you will learn Apache Pig, the types of use cases where Pig can be used, the tight coupling between Pig and MapReduce, Pig Latin scripting, Pig running modes, Pig UDFs, Pig Streaming, and testing Pig scripts. You will also work on a healthcare dataset.
Topics:
Introduction to Apache Pig
MapReduce vs Pig
Pig Components & Pig Execution
Pig Data Types & Data Models in Pig
Pig Latin Programs
Shell and Utility Commands
Pig UDF & Pig Streaming
Testing Pig scripts with PigUnit
Aviation use case in Pig
Pig Demo on Healthcare Dataset
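The core Pig Latin data flow covered above (LOAD, GROUP, FOREACH ... GENERATE) can be approximated in plain Python. The Pig script in the comments is hypothetical and the records are made up; it only illustrates how a GROUP/COUNT pipeline behaves.

```python
# A GROUP-and-COUNT pipeline, Pig-style. The equivalent (hypothetical)
# Pig Latin script would be:
#   records  = LOAD 'patients.csv' USING PigStorage(',') AS (id:int, state:chararray);
#   by_state = GROUP records BY state;
#   counts   = FOREACH by_state GENERATE group, COUNT(records);
from collections import defaultdict

records = [(1, "NY"), (2, "CA"), (3, "NY"), (4, "TX"), (5, "NY")]

by_state = defaultdict(list)          # GROUP records BY state
for rec in records:
    by_state[rec[1]].append(rec)      # Pig calls each value list a "bag"

counts = {state: len(bag) for state, bag in by_state.items()}  # FOREACH ... COUNT
print(counts)  # {'NY': 3, 'CA': 1, 'TX': 1}
```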
Apache Hive
Learning Objectives: This module will help you in understanding Hive concepts, Hive Data types, loading and querying data in Hive, running hive scripts and Hive UDF.
Topics:
Introduction to Apache Hive
Hive vs Pig
Hive Architecture and Components
Hive Metastore
Limitations of Hive
Comparison with Traditional Database
Hive Data Types and Data Models
Hive Partition
Hive Bucketing
Hive Tables (Managed Tables and External Tables)
Importing Data
Querying Data & Managing Outputs
Hive Script & Hive UDF
Retail use case in Hive
Hive Demo on Healthcare Dataset
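The Hive Partition and Hive Bucketing topics above come down to a storage layout: each partition value becomes a directory, and each row within a partition lands in bucket `hash(clustered_column) % num_buckets`. The sketch below is plain Python for illustration; the table definition is invented, and Hive's actual hash function differs from Python's for non-integer columns.

```python
# Where a row lands in a table declared (hypothetically) as:
#   PARTITIONED BY (year) CLUSTERED BY (customer_id) INTO 4 BUCKETS
# Illustrative only.
NUM_BUCKETS = 4

def storage_location(year, customer_id, num_buckets=NUM_BUCKETS):
    """Return (partition_dir, bucket_file) for one row."""
    partition_dir = f"year={year}"                        # one directory per partition value
    bucket_file = f"bucket_{customer_id % num_buckets}"   # integers hash to themselves
    return partition_dir, bucket_file

print(storage_location(2002, 10))  # ('year=2002', 'bucket_2')
```

Because the partition value is encoded in the directory name, a query filtering on `year` reads only that directory (partition pruning), and bucketing lets joins on `customer_id` match bucket-to-bucket.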
Advanced Apache Hive and HBase
Learning Objectives: In this module, you will understand advanced Apache Hive concepts such as UDF, Dynamic Partitioning, Hive indexes and views, and optimizations in Hive. You will also acquire in-depth knowledge of Apache HBase, HBase Architecture, HBase running modes and its components.
Topics:
Hive QL: Joining Tables, Dynamic Partitioning
Custom MapReduce Scripts
Hive Indexes and Views
Hive Query Optimizers
Hive Thrift Server
Hive UDF
Apache HBase: Introduction to NoSQL Databases and HBase
HBase v/s RDBMS
HBase Components
HBase Architecture
HBase Run Modes
HBase Configuration
HBase Cluster Deployment
Advanced Apache HBase
Learning Objectives: This module will cover advanced Apache HBase concepts. We will see demos on HBase Bulk Loading & HBase Filters. You will also learn what Zookeeper is, how it helps in monitoring a cluster, & why HBase uses Zookeeper.
Topics:
HBase Data Model
HBase Shell
HBase Client API
HBase Data Loading Techniques
Apache Zookeeper Introduction
ZooKeeper Data Model
Zookeeper Service
HBase Bulk Loading
Getting and Inserting Data
HBase Filters
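The HBase Data Model topic above amounts to a sorted, versioned nested map: row key, then column family and qualifier, then timestamped values, with reads returning the newest version by default. A minimal sketch in plain Python (illustrative only; real access goes through the HBase shell or client API, and the row/column names here are invented):

```python
# HBase table as nested maps:
#   row key -> "family:qualifier" -> {timestamp: value}
# Reads return the value with the newest timestamp. Illustrative only.
table = {}

def put(row, family, qualifier, value, ts):
    table.setdefault(row, {}).setdefault(f"{family}:{qualifier}", {})[ts] = value

def get(row, family, qualifier):
    versions = table.get(row, {}).get(f"{family}:{qualifier}", {})
    return versions[max(versions)] if versions else None  # newest timestamp wins

put("row1", "info", "city", "Pune", ts=1)
put("row1", "info", "city", "Delhi", ts=2)   # newer version shadows the old one
print(get("row1", "info", "city"))  # Delhi
```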
Processing Distributed Data with Apache Spark
Learning Objectives: In this module, you will learn what Apache Spark is, SparkContext & the Spark Ecosystem. You will learn how to work with Resilient Distributed Datasets (RDDs) in Apache Spark. You will also run an application on a Spark cluster & compare the performance of MapReduce and Spark.
Topics:
What is Spark
Spark Ecosystem
Spark Components
What is Scala
Why Scala
SparkContext
Spark RDD
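The RDD topics above hinge on lazy transformations and eager actions: transformations are only recorded, and nothing runs until an action is called, which is a key reason chained Spark stages avoid the intermediate disk writes of chained MapReduce jobs. A minimal RDD-like pipeline in plain Python shows the idea (illustrative only; real RDDs are distributed across executors and recover from failures via their lineage):

```python
# A toy lazy-evaluation pipeline in the spirit of Spark RDDs.
# Transformations (map, filter) just record steps; the action
# (collect) runs the whole pipeline in a single pass.
class MiniRDD:
    def __init__(self, data, ops=None):
        self.data, self.ops = data, ops or []

    def map(self, fn):            # transformation: lazy, returns a new "RDD"
        return MiniRDD(self.data, self.ops + [("map", fn)])

    def filter(self, fn):         # transformation: also lazy
        return MiniRDD(self.data, self.ops + [("filter", fn)])

    def collect(self):            # action: evaluate everything now
        out = []
        for item in self.data:
            keep = True
            for kind, fn in self.ops:
                if kind == "map":
                    item = fn(item)
                elif kind == "filter" and not fn(item):
                    keep = False
                    break
            if keep:
                out.append(item)
        return out

rdd = MiniRDD([1, 2, 3, 4]).map(lambda x: x * 10).filter(lambda x: x > 15)
print(rdd.collect())  # [20, 30, 40]
```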
Oozie and Hadoop Project
Learning Objectives: In this module, you will understand how multiple Hadoop ecosystem components work together to solve Big Data problems. This module will also cover Flume & Sqoop demo, Apache Oozie Workflow Scheduler for Hadoop Jobs, and Hadoop Talend integration.
Topics:
Oozie
Oozie Components
Oozie Workflow
Scheduling Jobs with Oozie Scheduler
Demo of Oozie Workflow
Oozie Coordinator
Oozie Commands
Oozie Web Console
Oozie for MapReduce
Combining flow of MapReduce Jobs
Hive in Oozie
Hadoop Project Demo
Hadoop Talend Integration
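A minimal, hypothetical Oozie workflow definition ties the topics above together: a start node, one MapReduce action with ok/error transitions, a kill node, and an end node. All names and paths below are illustrative only.

```xml
<!-- Hypothetical workflow.xml: one MapReduce action, then end;
     any failure routes to the kill node. -->
<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.5">
    <start to="wordcount"/>
    <action name="wordcount">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.input.dir</name>
                    <value>/user/demo/input</value>
                </property>
                <property>
                    <name>mapred.output.dir</name>
                    <value>/user/demo/output</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Job failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

An Oozie Coordinator would then reference this workflow to run it on a schedule or when input data arrives.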
Analysis of an Online Book Store
A. Find out the frequency of books published each year. (Hint: A sample dataset will be provided)
B. Find out in which year the maximum number of books were published.
C. Find out how many books were published based on ranking in the year 2002.
Sample Dataset Description
The Book-Crossing dataset consists of 3 tables that will be provided to you.
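Analyses A and B can be sketched in plain Python over a made-up sample shaped like the books table (the real Book-Crossing column layout may differ):

```python
# Count books per publication year (A), then find the peak year (B).
# Sample rows are invented; the real dataset is provided in the course.
from collections import Counter

books = [  # (isbn, title, year_of_publication)
    ("0001", "Book A", 2002), ("0002", "Book B", 2001),
    ("0003", "Book C", 2002), ("0004", "Book D", 1999),
]

books_per_year = Counter(year for _, _, year in books)   # analysis A
peak_year = max(books_per_year, key=books_per_year.get)  # analysis B

print(books_per_year[2002])  # 2
print(peak_year)             # 2002
```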
Airlines Analysis
Find the list of airports operating in India
Find the list of airlines having zero stops
Find the list of airlines operating with codeshare
Find which country (or territory) has the highest number of airports
Find the list of active airlines in the United States
Sample Dataset Description
In this use case, there are 3 datasets: Final_airlines, routes.dat, and airports_mod.dat.
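Two of the queries above can be sketched in plain Python over made-up rows shaped like the airports and airlines files (the real column layouts may differ):

```python
# Airports in India, and active airlines in the United States.
# Sample rows and column positions are invented for illustration.
airports = [  # (airport_id, name, city, country)
    (1, "Indira Gandhi Intl", "Delhi", "India"),
    (2, "Logan Intl", "Boston", "United States"),
    (3, "Chhatrapati Shivaji Intl", "Mumbai", "India"),
]
airlines = [  # (airline_id, name, country, active_flag)
    (10, "Airline X", "United States", "Y"),
    (11, "Airline Y", "United States", "N"),
    (12, "Airline Z", "India", "Y"),
]

india_airports = [name for _, name, _, country in airports if country == "India"]
active_us = [name for _, name, country, active in airlines
             if country == "United States" and active == "Y"]

print(len(india_airports))  # 2
print(active_us)            # ['Airline X']
```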
Hadoop is an Apache project (i.e. an open-source software) to store & process Big Data. Hadoop stores Big Data in a distributed & fault-tolerant manner over commodity hardware. Afterward, Hadoop tools are used to perform parallel data processing over HDFS (Hadoop Distributed File System).
As organizations have realized the benefits of Big Data Analytics, there is a huge demand for Big Data & Hadoop professionals. Companies are looking for Big Data & Hadoop experts with knowledge of the Hadoop Ecosystem and best practices for HDFS, MapReduce, Spark, HBase, Hive, Pig, Oozie, Sqoop & Flume. You can gain these skills with the Online Big Data Course.
Edureka's Hadoop Training is designed to make you a certified Big Data practitioner by providing rich hands-on training on the Hadoop Ecosystem. This Hadoop Certification is a stepping stone in your Big Data journey, and you will get the opportunity to work on various Big Data projects. Edureka's Big Data Course helps you learn all about Hadoop architecture, HDFS, the advanced Hadoop MapReduce framework, Apache Pig, Apache Hive, etc. The primary objective of this Hadoop training is to assist you in comprehending Hadoop's complex architecture and its elements. This Big Data Certification provides in-depth knowledge of Big Data and Hadoop Ecosystem tools that helps you clear the Hadoop certification exam.
DevOps Engineer
Robert Half
West Palm Beach, FL 33407
$52.25 - $60.50 an hour - Temporary, Contract
Job details
Salary
$52.25 - $60.50 an hour
Job Type
Temporary
Contract
Benefits
401(k)
Dental insurance
Disability insurance
Health insurance
Vision insurance
Full Job Description
As a DevOps Engineer, you will be a core, pivotal, and transformational member of the engineering team for our client. This role will be an integral part of our agile teams in analyzing, designing, building, and testing high quality cloud deployment methodologies and systems. You will work on a team under the leadership of senior architects, building the future of our state-of-the-art cloud-based platforms. This position will be fully responsible for our platform and application pipelines, and the flow of code updates through the engineering group, as well as creating and monitoring highly available cloud infrastructure and platforms to host that code. You will have the opportunity to work on a variety of technologies, especially cloud platform services.
Responsibilities
Some responsibilities include cloud architecture, and infrastructure and building tooling and dev-ops systems and processes. Your responsibility will span all environments including production, development, and testing.
AWS, Cloud, DevOps, Ansible, Terraform, Bilingual in English and French
Technology Doesn't Change the World, People Do. ®
Robert Half is the world’s first and largest specialized talent solutions firm that connects highly qualified job seekers to opportunities at great companies. We offer contract, temporary and permanent placement solutions for finance and accounting, technology, marketing and creative, legal, and administrative and customer support roles.
Robert Half puts you in the best position to succeed by advocating on your behalf and promoting you to employers. We provide access to top jobs, competitive compensation and benefits, and free online training. Stay on top of every opportunity – even on the go. Download the Robert Half app and get 1-tap apply, instant notifications for AI-matched jobs, and more.
All applicants applying for U.S. job openings must be legally authorized to work in the United States. Benefits are available to contract/temporary professionals, including medical, vision, dental, and life and disability insurance. Hired contract/temporary professionals are also eligible to enroll in our company 401(k) plan. Visit roberthalf.gobenefits.net for more information.
© 2023 Robert Half. An Equal Opportunity Employer. M/F/Disability/Veterans.
Developer with Expertise in C++ and Python, Cloud Computing, and Databases
Posted 3 hours ago
Worldwide
Looking for a skilled Full Stack Developer who is proficient in both C++ and Python, and has experience working with cloud computing and databases. The ideal candidate should have worked with AWS cloud services and be well-versed in RESTful APIs and WebSocket APIs. Familiarity with Git and Docker is a must, and experience with DynamoDB/MongoDB databases is preferred.
Additionally, the developer should be able to work in the Indian time zone.
This is a project-based position, but successful completion may lead to a full-time opportunity. Further details regarding the role will be provided via chat.
Johns Hopkins Applied Physics Laboratory (APL) - Laurel, MD 20723
Tuition reimbursement
Description
Do you enjoy working on the cutting edge of modern cloud computing capabilities and their applications?
Are you interested in applying your cloud engineering skills to exciting problems in space exploration and security?
If so, we're looking for someone like you to join our team at APL.
We are seeking a DevOps Engineer to join us in the Analysis and Applications group of APL’s Space Exploration Sector. Our group does a myriad of work spanning space mission planning, science data collection and analysis, and space warfighter support for our client organizations at NASA and in the Department of Defense. We are a multidisciplinary science and engineering team addressing challenges in space mission simulation, data collection and engineering, data science, visualization and more. More recently, we have been evaluating and incorporating cloud computing approaches into our data collection and analysis practices, and are intent on continuing in this direction as we modernize to account for larger mission data volumes, big-data analytical capabilities, the latest advances in machine learning and artificial intelligence, and the shift in the scientific community towards Open Science enablement through improved cross organizational collaboration and data sharing.
Our current needs are focused on learning to use cloud computing resources effectively and efficiently for scalable & reusable science data pipelines, as well as for collaborative analysis and data sharing across teams from multiple institutions.
As a DevOps Engineer, your initial projects would involve...
Using Infrastructure-as-Code tool stacks such as AWS CDK and Terraform to automate the creation and operations of science data environments, platforms and tools in AWS.
Building and deploying containerized science applications to AWS using Docker & Kubernetes.
Developing science data pipeline and analysis solutions using Python, and AWS services such as S3, Dynamo and Lambda.
Working closely with scientific users of tools to understand evolving cloud infrastructure and software needs, and opportunities for further growing science capabilities by better use of cloud platform capabilities.
Qualifications
You meet our minimum qualifications for the job if you...
Have a BS degree in Computer Science, Engineering or a related field with 3+ years of experience in the construction and operations of AWS cloud computing environments.
Are proficient with AWS CDK and/or Terraform for building environments, and have built and deployed containerized applications to Kubernetes using these frameworks.
Have experience in Python for both application development and infrastructure automation, in a team environment using tools like GitLab or GitHub.
Have the skill to present complex technical concepts to audiences of varying size and level of experience.
Are willing and able to travel 5% of the time.
Are able to obtain a Top Secret security clearance. If selected, you will be subject to a government security clearance investigation and must meet the requirements for access to classified information. Eligibility requirements include U.S. citizenship.
You'll go above and beyond our minimum requirements if you...
Have an MS or PhD in Computer Science, Engineering or a related field.
Hold an AWS Certification: DevOps, Practitioner, Developer, etc.
Have 1+ years of experience with or training in any of the following: spacecraft operations, heliophysics, SaaS platform architecture & operations, or data engineering.
Be Part of Something Innovative
Over the past 25 years, the Space Exploration Sector at APL, has pushed the boundaries of what is possible; delivering game-changing impacts to sponsors like NASA and the Department of Defense. This includes historic, science space firsts like New Horizons reaching Pluto, Parker Solar Probe being the first to “touch” the sun, and the DART mission that redirected an asteroid for planetary defense. As a not-for-profit university affiliated research center, APL also delivered solutions to our nation’s national security challenges, as proven with the recent Deep Space Advanced Radar Concept (DARC) Tech Demo, and acts as a trusted partner with the US Space Force in space domain awareness and space-integrated warfare.
APL teams are currently developing missions that will advance the search for life in the Solar System through programs such as Europa Clipper and Dragonfly, exploring the lunar and cislunar domains, and providing fundamental knowledge of our Sun's influence on the near-Earth environment through IMAP and other research and technology endeavors. To learn more about APL's missions and projects, visit https://civspace.jhuapl.edu/ and https://www.jhuapl.edu/OurWork/NationalSecuritySpace.
Why work at APL?
The Johns Hopkins University Applied Physics Laboratory (APL) brings world-class expertise to our nation’s most critical defense, security, space and science challenges. While we are dedicated to solving complex challenges and pioneering new technologies, what makes us truly outstanding is our culture. We offer a vibrant, welcoming atmosphere where you can bring your authentic self to work, continue to grow, and build strong connections with inspiring teammates.
At APL, we celebrate our differences and encourage creativity and bold, new ideas. Our employees enjoy generous benefits, including a robust education assistance program, unparalleled retirement contributions, and a healthy work/life balance. APL’s campus is located in the Baltimore-Washington metro area. Learn more about our career opportunities at http://www.jhuapl.edu/careers.
About Us
APL is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, creed, color, religion, sex, gender identity or expression, sexual orientation, national origin, age, physical or mental disability, genetic information, veteran status, occupation, marital or familial status, political opinion, personal appearance, or any other characteristic protected by applicable law.
APL is committed to promoting an innovative environment that embraces diversity, encourages creativity, and supports inclusion of new ideas. In doing so, we are committed to providing reasonable accommodation to individuals of all abilities, including those with disabilities. If you require a reasonable accommodation to participate in any part of the hiring process, please contact Accommodations@jhuapl.edu. Only by ensuring that everyone’s voice is heard are we empowered to be bold, do great things, and make the world a better place.
Cloud DevOps Engineer
Analog Devices
Boston, MA
Full-time
Cloud DevOps Engineer
Analog Devices - Boston, MA
Job details
Job Type
Full-time
Full Job Description
Come join Analog Devices (ADI) – a place where Innovation meets Impact. For more than 55 years, Analog Devices has been inventing new breakthrough technologies that transform lives. At ADI you will work alongside the brightest minds to collaborate on solving complex problems that matter from autonomous vehicles, drones and factories to augmented reality and remote healthcare.
ADI fosters a culture that focuses on employees through beneficial programs, aligned goals, continuous learning opportunities, and practices that create a more sustainable future.
ADI At A Glance
Analog Devices, Inc. (NASDAQ: ADI) is a global semiconductor leader that bridges the physical and digital worlds to enable breakthroughs at the Intelligent Edge. ADI combines analog, digital, and software technologies into solutions that help drive advancements in digitized factories, mobility, and digital healthcare, combat climate change, and reliably connect humans and the world. With revenue of more than $12 billion in FY22 and approximately 25,000 people globally working alongside 125,000 global customers, ADI ensures today’s innovators stay Ahead of What’s Possible.
Analog Devices is uniquely positioned for success at the boundary of the physical and digital worlds. Analog Devices transforms physical phenomena – sound, light, radio waves, voltages, currents, and motion – into high-fidelity data. Our mission is to build the Intelligent Edge, where AI transforms how we solve challenging problems by combining deep application knowledge, close customer relationships, extraordinary data, advanced circuits, and breakthrough algorithms.
Analog Devices has established the AI Solutions Business Unit to deliver AI-enabled products to our vast markets. The AI Solutions BU develops products at multiple technology stack layers, from AI-enabled software applications to deeply embedded AI algorithms. The AI Solutions BU collaborates with our Market BUs to solve problems beyond the reach of pure semiconductor, circuit-level, or architectural innovation.
We are seeking a DevOps Engineer to join our team and help design, develop, and maintain our cloud infrastructure for AI/ML applications. The ideal candidate will have a DevOps and Software Engineering background, knowledge of Cloud technologies (GCP preferred), and familiarity with infrastructure-as-code software tools (e.g., Terraform).
Responsibilities:
Design, develop, and maintain the Cloud infrastructure for AI/ML applications using infrastructure-as-code software tools (e.g., Terraform).
Setup and manage CI/CD pipelines and other developer tools to increase engineering throughput
Ensure our Cloud infrastructure's security, availability, and reliability via monitoring, observability, and alerting
Work closely with our Software Engineers to optimize performance and scalability and deliver functional customer-facing products
Create infrastructure code and reference architectures for orchestrating environment creation, housekeeping, and deployment to make vertical application development faster
Stay up to date with the latest developments in cloud technologies and DevOps tools
Requirements:
At least 2 years of experience in a Cloud DevOps or Software engineering role
Hands-on experience with one major cloud provider like GCP (preferred), AWS, or Azure
Familiarity with containerization and orchestration using technologies such as Kubernetes
Experience with CI/CD tools like GitLab CI, Cloud Build, or Jenkins.
Familiarity with Infrastructure as Code concepts
Experience with Agile/Scrum development processes and tools (e.g., Jira, Confluence, etc.)
Strong communication and collaboration skills
A Bachelor's degree in Computer Science or a related field
For positions requiring access to technical data, Analog Devices, Inc. may have to obtain export licensing approval from the U.S. Department of Commerce - Bureau of Industry and Security and/or the U.S. Department of State - Directorate of Defense Trade Controls. As such, applicants for this position – except US Citizens, US Permanent Residents, and protected individuals as defined by 8 U.S.C. 1324b(a)(3) – may have to go through an export licensing review process.
Analog Devices is an equal opportunity employer. We foster a culture where everyone has an opportunity to succeed regardless of their race, color, religion, age, ancestry, national origin, social or ethnic origin, sex, sexual orientation, gender, gender identity, gender expression, marital status, pregnancy, parental status, disability, medical condition, genetic information, military or veteran status, union membership, and political affiliation, or any other legally protected group.
EEO is the Law:
Notice of Applicant Rights Under the Law
Job Req Type: Graduate Job
Required Travel: No
Broadcom
Broomfield, CO
$63,000 - $105,000 a year - Full-time
Salary
$63,000 - $105,000 a year
Job Type
Full-time
Shift and Schedule
On call
Encouraged to Apply
Fair chance
401(k)
401(k) matching
Dental insurance
Employee assistance program
Employee stock purchase plan
Family leave
Health insurance
Paid sick time
Paid time off
Vision insurance
Job Description:
Broadcom is seeking an experienced, motivated, curious DevOps Engineer to join our Rally Datastores DevOps group.
Our team emphasizes collaborating with our customers, the engineering teams, enabling them to deliver value at their own pace by providing a dependable platform with integrated solutions for deployment and telemetry.
Additionally, the engineering teams collaborate with us, ensuring we're delivering the features and improvements they need to continuously deliver value.
Why would you want to work here?
Why would I love to build Rally Software at Broadcom?
Influence - We are a small team. You are empowered to help us succeed.
Location - Broomfield, Colorado - 300+ days of sun, mountains, skiing, and biking.
Pace - Sustainable and flexible work/life balance.
Career - Support for attending conferences, writing books, blogging, and speaking.
Camaraderie - We take walks, have quarterly celebrations, and play games to give our brains a break.
Collaboration - Our team works closely together using pairing and Kanban; enabling us to learn from everyone's experience.
Here are some reasons you want to be a Rally Engineer:
Growth – We deliver software using a lot of different, leading-edge technologies that are modernizing a legacy approach to software.
Diversity – We value diversity of thought, background, and experience on our teams. We believe it makes us, our software, and our experiences better.
Career Growth– We work in a plethora of code areas, with dynamic tools, and in an ever-changing environment. We encourage team rotations to explore other areas of code and focus.
Customers – We are doing work that matters to our customers.
Collaboration – We tackle problems together in an agile environment.
Sustainability – Both in how we work and how we live.
What would your responsibilities be?
As a member of the Rally Datastores DevOps team, you will be responsible for:
Collaborating with a multi-functional team to define work that will meet organizational objectives;
Experimenting with new technologies that improve our systems performance, uptime and ability to scale
Maintaining and tuning our Datastore systems to ensure system availability and performance
Building and enhancing our automation to reduce manual effort
Participating in the on-call rotation to ensure 24x7x365 system availability
What qualifications do you need?
To qualify for this position, you have:
BA/BS degree with 2+ years of experience.
Ability to follow existing processes and procedures to manage and update our infrastructure
Basic scripting experience (Bash, Python, Perl)
Experience in Linux administration and troubleshooting
Understanding of configuration management tooling such as Ansible
To be successful, you will come to this role with solid DevOps skills and a collaborative mindset:
A track record of being curious and searching for answers on your own
Desire to work in an Agile environment
Desire to work with development teams to solve technical challenges
Familiarity with a query language (SQL, Lucene)
To really stand out as the perfect person for this role, you may also have:
Experience running transactional datastores
Experience with CI/CD pipeline such as Jenkins
Experience with Google Cloud Platform
Knowledge of relational and non-relational datastores (PostgreSQL, MySQL, Mongo, Elasticsearch)
Broadcom Software is one of the world’s leading enterprise software companies, modernizing, optimizing, and protecting the world’s most complex technology environments. With its engineering-centered culture, Broadcom Software is a global software leader building a comprehensive portfolio of industry-leading enterprise software enabling innovation, stability, scalability, and security for the largest global companies in the world.
In the Enterprise Software Division, we build software to support companies in making intelligent, data-driven decisions to achieve better business outcomes. Our industry success depends on a decades-long track record of delivering transformational solutions to teams who plan, build, test, and operate mission-critical software for the world’s largest and most complex businesses. To do this, we respond quickly and thoughtfully, innovate in the context of customer needs, and collaborate inclusively with customers and internal partners. Our business will nurture your intellect and give you opportunities to expand your skills even further.
Additional Job Description:
Compensation and Benefits
The annual base salary range for this position is $63,000 - $105,000.
This position is also eligible for a discretionary annual bonus in accordance with relevant plan documents, and equity in accordance with equity plan documents and equity award agreements.
Broadcom offers a competitive and comprehensive benefits package: Medical, dental and vision plans, 401(K) participation including company matching, Employee Stock Purchase Program (ESPP), Employee Assistance Program (EAP), company paid holidays, paid sick leave and vacation time. The company follows all applicable laws for Paid Family Leave and other leaves of absence.
Broadcom is proud to be an equal opportunity employer. We will consider qualified applicants without regard to race, color, creed, religion, sex, sexual orientation, gender identity, national origin, citizenship, disability status, medical condition, pregnancy, protected veteran status or any other characteristic protected by federal, state, or local law. We will also consider qualified applicants with arrest and conviction records consistent with local law.
If you are located outside USA, please be sure to fill out a home address as this will be used for future correspondence.
Posted 1 day ago
Wealthfront
Boston, MA
Remote
$180,000 - $210,000 a year - Full-time
Salary
$180,000 - $210,000 a year
Job Type
Full-time
Shift and Schedule
On call
401(k)
Dental insurance
Health insurance
Paid time off
Parental leave
Vision insurance
We’re looking for a Staff DevOps Engineer to join our Infrastructure team, where you will build and maintain the infrastructure and services that run in our data centers and support our customer-facing products.
Wealthfront’s Infrastructure team is a team of “Generalists with Expertise” who adhere to the principles of “infrastructure-as-code” and bring strategic expertise for designing, implementing, maintaining, and improving our infrastructure.
We're a modern infrastructure engineering team that believes strongly in automation and standardization. We operate in both physical data centers and cloud environments and leverage open-source software such as MariaDB, NGINX, Jenkins, Chef, Kibana, and Grafana to deliver automated infrastructure to our clients and every member of the Engineering team.
Maintain our core infrastructure by automating software deployment, infrastructure configuration, and database cluster management and tuning
Ensure that mission-critical services operate reliably by triaging and fixing operational issues as an on-call engineer, participating in post-mortems, and implementing improvements to prevent future issues.
Design, implement, and deploy internal tools and services to accelerate the productivity of the wider Engineering team and enable direct ownership of operations.
Help manage our server hardware in our physical data centers and occasionally travel to our Bay Area or New Jersey data centers for onsite projects.
Be involved in key decisions regarding the evolution of our infrastructure
Mentor junior members of the team
8+ years of experience running and troubleshooting production-critical applications and services
Designed infrastructure for CI/CD pipelines (e.g. Jenkins), dependency management (e.g. Nexus), and complex orchestration workflows for Java-based production services
Software development experience in Java or a similar language
Demonstrated experience with modern Linux systems
Proficiency with at least one automation technology such as Chef, Puppet, or Ansible
Knowledge of SQL and experience working with online databases such as MariaDB/MySQL
Excellent communications and project leadership skills and a desire to both learn from and educate your peers
A BS or MS in Engineering or Computer Science or a related field, or equivalent professional experience
Pair you with a 1:1 mentor to guide you through our onboarding program.
Encourage you to lead impactful projects which match your professional goals.
Support your professional development by providing feedback during weekly 1:1s and during our bi-annual reviews.
Give you autonomy so you can be a happy and successful member of our team.
Estimated annual salary range: $180,000 - $210,000 USD plus Equity.
Plus benefits include medical, vision, dental, 401K plan, generous time off, parental leave, wellness reimbursements, professional development, employee investing discount, and more!
Everyone across the financial spectrum deserves to live secure and rewarding lives. In order to successfully serve clients across the United States, the Wealthfront team is focused on hiring team members with a diverse range of backgrounds, experiences and perspectives. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
About Wealthfront
Wealthfront started with the ambition to transform the investment advisory business, with the goal to unlock access to high quality investment advice for millions who were underserved by the traditional institutions. We built the first automated investment product that allows you to invest in a personalized portfolio of thousands of companies in seconds for a remarkably low fee; we then expanded into banking, which made it remarkably easy for people to automate their finances end-to-end and eliminated the hassle of money management. All of this attracted more than $27 billion of our clients’ hard-earned savings, created the robo-advisor category and transformed the broader industry. And yet, we have a long way to go to achieve our mission to build a financial system that favors people, not institutions.
Wealthfront’s vision is to make it delightfully easy to build long-term wealth on your own terms. This vision is more relevant than ever because millions more people are getting into the market early and investing their hard-earned savings in a handful of stocks. While this is a great way to start, it is inconsistent with building long-term wealth. We want to empower young investors to expand their horizons, easily explore and execute a wider range of investing strategies, and make informed investment decisions that are consistent with their values and beliefs, while also making it effortless to grow and compound their savings in a way that is transformational to their lives and their long-term future.
Please review our candidate privacy notice
Posted Today
Wealthfront
This job has expired on Indeed
AWS DevOps Engineer
Inner Balance Technology Services
Washington, DC
Full-time
Job details
Job Type
Full-time
Qualifications
Kubernetes: 3 years (Required)
DevOps: 3 years (Required)
US Citizen. Able to obtain a US Government Trust Clearance (Required)
Benefits
Dental insurance
Health insurance
Vision insurance
Full Job Description
*** PLEASE NO H1B or GREEN CARD HOLDERS ***
**US CITIZEN ONLY**
**DIRECT GOV CLIENT**
**100% Remote supporting a client in EST**
We are seeking an experienced DevOps Engineer with a background working in AWS. The ideal candidate will have a passion for creating and managing infrastructure as code and automating deployments using AWS tools and services. The DevOps Engineer will be responsible for designing, implementing, and maintaining our cloud infrastructure in a highly available, scalable, and fault-tolerant manner.
This individual must be a US CITIZEN and should be able to obtain a government trust clearance.
To be successful in this role, you should have excellent problem-solving skills and be a passionate self-starter working in a technical environment.
REQUIREMENTS:
MUST BE US CITIZEN
Technical Skills
3+ years of experience with Kubernetes and Docker, preferably working experience with AWS EKS and AWS ECR
3+ years of experience in Linux system administration
2+ years of experience in AWS cloud infrastructure management and deployment
2+ years of experience in deployment/configuration of AWS services such as EC2, S3, RDS, Lambda, CloudFormation, and others
2+ years of experience implementing a CI/CD process for Application Development, Data Integration, and/or Business Intelligence Development
2+ years of experience with automation tools such as AWS CloudFormation, Terraform, Ansible, and/or Chef
2+ years of experience with CI/CD tool stack, preferably with AWS CodeCommit, CodeBuild, and CodePipeline
2+ years of experience
Proficient in scripting languages such as Python or Bash and SQL
Proficient in database administration in Oracle, SQL Server, Redshift, etc.
Excellent troubleshooting and problem-solving skills
Strong communication and collaboration skills
Client Engagement Experience
Experience working directly with clients to gather and build requirements, analyze data, and design technical solutions that address client needs
Conducts unit tests, code reviews, assesses and improves site/software performance, and maintains design and code documentation
Experience working on a Scrum Agile Development Team
Self starter and problem solver
Job Type: Full-time
Pay: From $130,000.00 per year
Benefits:
Dental insurance
Health insurance
Vision insurance
Experience:
Kubernetes: 3 years (Required)
DevOps: 3 years (Required)
License/Certification:
US Citizen. Able to obtain a US Government Trust Clearance (Required)
Work Location: Remote
HCA Healthcare
Brentwood, TN 37027
Shift and Schedule
On call
401(k)
401(k) matching
Adoption assistance
Disability insurance
Employee stock purchase plan
Family leave
Flexible spending account
Health insurance
Paid time off
Pet insurance
Relocation assistance
Tuition reimbursement
Vision insurance
Introduction
Are you passionate about the patient experience? At HCA Healthcare, we are committed to caring for patients with purpose and integrity. We care like family! Jump-start your career as a DevOps Engineer today with HCA Healthcare.
Benefits
HCA Healthcare offers a total rewards package that supports the health, life, career and retirement of our colleagues. The available plans and programs include:
Comprehensive medical coverage that covers many common services at no cost or for a low copay. Plans include prescription drug and behavioral health coverage as well as free telemedicine services and free AirMed medical transportation.
Additional options for dental and vision benefits, life and disability coverage, flexible spending accounts, supplemental health protection plans (accident, critical illness, hospital indemnity), auto and home insurance, identity theft protection, legal counseling, long-term care coverage, moving assistance, pet insurance and more.
Free counseling services and resources for emotional, physical and financial wellbeing
401(k) Plan with a 100% match on 3% to 9% of pay (based on years of service)
Employee Stock Purchase Plan with 10% off HCA Healthcare stock
Family support through fertility and family building benefits with Progyny and adoption assistance.
Referral services for child, elder and pet care, home and auto repair, event planning and more
Consumer discounts through Abenity and Consumer Discounts
Retirement readiness, rollover assistance services and preferred banking partnerships
Education assistance (tuition, student loan, certification support, dependent scholarships)
Colleague recognition program
Time Away From Work Program (paid time off, paid family leave, long- and short-term disability coverage and leaves of absence)
Employee Health Assistance Fund that offers free employee-only coverage to full-time and part-time colleagues based on income.
Learn more about Employee Benefits
Note: Eligibility for benefits may vary by location.
Come join our team as a DevOps Engineer. We care for our community! Just last year, HCA Healthcare and our colleagues donated $13.8 million dollars to charitable organizations. Apply Today!
Job Summary and Qualifications
HCA Healthcare ITG
Job Summary:
Provide technical skills that cover a broad range of disciplines and/or a particular technical discipline that is of significance to HCA. Perform assigned technical projects with minimal supervision. Identify, investigate, evaluate and become proficient in new technologies and technical disciplines that are of significance to HCA Healthcare.
Provide end-to-end support of mission-critical applications and infrastructure. Build, test, and maintain infrastructure and application monitoring, alerting, and automated remediation where possible. Provide full-stack troubleshooting for n-tier application stacks in Linux and Windows environments.
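The alerting-and-automated-remediation loop described above can be sketched as simple threshold logic. This is an illustrative sketch only, not HCA's actual tooling; the function name and thresholds are made up for the example:

```python
def remediation_action(recent_checks, restart_after=3):
    """Decide how to react to health-check results.

    recent_checks: list of booleans, newest last (True = healthy).
    Returns "ok", "alert", or "restart".
    """
    # Count consecutive failures at the tail of the history.
    failures = 0
    for ok in reversed(recent_checks):
        if ok:
            break
        failures += 1
    if failures == 0:
        return "ok"
    # Escalate from paging a human to restarting the service.
    return "restart" if failures >= restart_after else "alert"

print(remediation_action([True, True, False]))          # transient blip -> alert
print(remediation_action([True, False, False, False]))  # sustained outage -> restart
```

In practice this kind of logic usually lives inside a monitoring platform's alert rules rather than hand-rolled code, but the escalation idea (distinguish a blip from a sustained outage before auto-remediating) is the same.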
Our Purpose
Your skills will help transform healthcare through technology and solutions that dramatically improve patient care and business operations. Everything we do at HCA ITG ultimately influences patient care and the patient experience.
Core Competencies: The following are highlighted competencies and core expectations for the job/role:
Enterprise Perspectives – Understanding of large enterprise IT infrastructure dependencies
Systems Engineering Planning and Management – risk management, configuration management, and continuous process improvement
Collaboration - building trust, communicating with impact, adaptability, and result oriented
Systems Engineering Life Cycle – concept definition, requirements, architecture design, integration, implementation, and operational maintenance
Innovative Approaches – Design and implement creative solutions to meet challenging requirements
At HCA IT&S, your deliverables will influence patient care. Every process, technology and decision matters.
General Responsibilities:
Requires knowledge of supported operating systems, utilities, vendor products, applicable programming languages, diagnostic techniques, database management systems, benchmarking methodologies, applicable communications protocols, applicable hardware configurations, use of statistical and analytical tools for system monitoring and evaluation.
In depth knowledge of technology, network, data and applications architectures.
Ability to communicate with various levels and types of end users is also required.
Maintain frequent interaction with internal and vendor technical staffs and project managers to ensure effective delivery of solutions in accordance with project timelines and associated budgets.
Provide technical leadership and responsibility for the installation and certification of all supporting software and hardware in a lab environment in order to assure a sound deployment basis for production implementations.
Education, Experience and Certifications:
1-3 years of experience – Required
Bachelor’s Degree – Preferred
Certification in one or more of the following: MCP, MCSE, MCSA, VCP, RH, ITIL, CSA, RHCE, Linux+ - Preferred
Experience with Java, Tomcat, Linux, Windows Server, MS SQL Server, Azure, Ansible, Jenkins, Docker, PowerShell, Bash, VMware, vSphere, Cisco vBlock, Dynatrace, Splunk, Virtual Instruments, Git
Capable of working effectively in a diverse, virtual team-oriented environment.
Must be a self-starter and work effectively with minimal supervision.
Must be customer-oriented and recognize the importance of customer service and meeting service level commitments.
Must be committed to quality and timeliness of assigned deliverables and milestones.
Other Required Qualifications:
Professional experience in Linux and Windows environments
Specific platform knowledge in one or more key areas including Windows Server 2012 & 2016, Red Hat 7.x and 8.x, and Apache Tomcat
Knowledge of SAN-attached devices and tools such as NetApp, PowerPath, VIPR, and HBAAnywhere is very desirable.
Knowledge of virtual technology, such as, VMWare ESX, Hyper-V, hyperconverged platforms, and private cloud technologies is highly desirable.
Experience with OS environment running one or more databases including Microsoft SQL 2014 -2016
Ability to work with DBAs and understand their requirements is necessary.
Proven track record of project delivery results.
Demonstrated written and verbal communication skills and the ability to work collaboratively are essential.
Solid understanding and experience with operating system concepts and practical implementation of those fundamentals in a large-scale production environment.
Working knowledge and experience with key ITIL processes is required.
Experience with application performance monitoring, operational analytics, and configuration management tools such as Splunk, Aternity, and BMC Atrium is a plus.
Experience with automation software such as Ansible, Jenkins, and Docker is highly desirable.
Previous experience with application development coding, debugging, testing and delivery into a production environment is highly desirable.
A solid understanding of compute performance metrics and tools is highly desirable, to include understanding of CPU utilization levels, CPU and disk queuing, I/O response times, and other key performance indicators and their impact on performance.
Experience with network fundamentals and performance principles, including a firm understanding of TCP/IP protocols.
Physical Demands/Working Conditions
Some after-hours work will be required occasionally when not on-call.
Possible prolonged sitting at workstation.
On-Call Rotation required for 24x7 support
Ideal Candidate
Strong analytical and design ability, high degree of creativity, strong troubleshooting and innovation ability
In depth knowledge of the business implications of technical approaches, directions and solutions
Ability to innovate in complex technical areas, ability to achieve results, advanced project planning and coordination skills, and superior communication skills
HCA Healthcare’s Information Technology Group (ITG) delivers healthcare IT products and services to HCA Healthcare's portfolio of business and partners, including Parallon, HealthTrust and Sarah Cannon.
For decades, ITG has been a pioneer in the industry, leading the transformation of healthcare into a new era of quality and connectivity. ITG relies on the breadth of the organization and depth of technical expertise to advance and enhance today’s healthcare and to enable our physicians and clinicians to provide world-class, innovative care for patients.
ITG employees rally around the noble cause of transforming healthcare through technology and find inspiration in the meaningful work they do—creating a culture that follows our mission statement which begins by saying “above all else we are committed to the care and improvement of human life.”
If you want a career in technology and have a heart for healthcare, apply your expertise to a mission that matters.
HCA Healthcare has been recognized as one of the World’s Most Ethical Companies® by the Ethisphere Institute more than ten times. In recent years, HCA Healthcare spent an estimated $3.7 billion in cost for the delivery of charitable care, uninsured discounts, and other uncompensated expenses.
"The great hospitals will always put the patient and the patient's family first, and the really great institutions will provide care with warmth, compassion, and dignity for the individual."- Dr. Thomas Frist, Sr.
HCA Healthcare Co-Founder
If you are looking for an opportunity that provides satisfaction and personal growth, we encourage you to apply for our DevOps Engineer opening. We promptly review all applications. Highly qualified candidates will be contacted for interviews. Unlock the possibilities and apply today!
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Cloud DevOps Engineer, Apple Pay
Apple
Cupertino, CA
Full-time
Job details
Job Type
Full-time
Benefits
Dental insurance
Employee stock purchase plan
Health insurance
RSU
Retirement plan
Full Job Description
Summary
Posted: Mar 31, 2023
Weekly Hours: 40
Role Number: 200463930
Imagine what you could do here. At Apple, new ideas have a way of becoming phenomenal products, services, and memorable customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Apple Pay is transforming the industry in payments, transit and identity, and we’re passionately focusing on the customer’s digital wallet experience. Our scale and security demands create unique opportunities for innovative and creative solutions which contribute to millions of customers’ daily interactions.
Key Qualifications
6+ years of experience with Linux, UNIX, CoreOS, etc.
Puppet, Ansible, Salt, Chef… Been there and done that. You know where each is strong and weak.
Docker, Nomad, Kubernetes, Mesos, Swarm… Yup, you’ve orchestrated it all.
The operating system is just one part of the ecosystem. You’re well versed in the rest (networking, storage, DBs, etc).
Prometheus, Nagios, Sensu… observability is imperative to success.
You know how to set direction and build consensus through vigorous debate.
Ambiguity doesn’t scare you. You see it as an opportunity to define the future.
You automate things rather than doing them twice.
You’re a hands-on strategist.
You can lead a project yourself.
You’re an expert in the DevOps space, but don’t want to be limited to that space.
Experience with HashiCorp stack - Nomad, Consul, Vault is a plus.
Description
The Apple Pay team is looking for Cloud DevOps Engineers to support our rapid growth and expansion. This new role will be a key part of the Infrastructure Engineering team at Apple Pay, with a broad impact across the product and customer experiences:
Gather requirements and build infrastructure and tooling to support Apple Pay product initiatives
Be a conduit for technical expertise in liaising with external Apple Pay partners
Provide technical guidance, troubleshooting expertise, and architectural insight to development, quality, and site reliability teams
Solve complex problems using both open-source and in-house tooling to support security and business initiatives
Build applications and tools to reduce barriers, decrease friction and speed up delivery of products
Scale existing technologies (or promote new technologies) to out-pace current growth projections
Evangelize next-generation cloud and DevOps products and processes, experiment with their implementation and bring them to fruition
Education & Experience
Additional Requirements
Pay & Benefits
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $130,000 and $242,000, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple’s discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple’s Employee Stock Purchase Plan. You’ll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses — including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Sr. DevOps Engineer
Applied Systems Canada
United States
Remote
Full-time
Job details
Job Type
Full-time
Benefits
401(k)
Health insurance
Opportunities for advancement
Paid parental leave
Paid time off
Parental leave
Vision insurance
Wellness program
Full Job Description
Job Overview:
Applied Systems, Inc., a worldwide leader in insurance technology, is currently searching for a Senior DevOps Engineer to join our team. The DevOps team delivers foundational components and has built a cross-application deployment framework for Applied’s portfolio of commercial software products. In this role, you will work in a collaborative Agile environment to develop, maintain, and support applications to build software, execute tests, and deploy releases for a variety of Applied client-facing products. You will be relied upon to investigate problems, research various solutions, and devise a work plan to solve those problems.
What You’ll Do
Document standardized processes for system deployment and management
Work closely with Software Development, Operations, InfoSec, and Architecture teams to provide fully automated deployment routines for production
Monitor system activity, tune system parameters for optimal performance, configure communications with other platforms/networks, configure and manage system security, and maintain current release levels and patch revisions
Work across functional (development, testing, deployment, systems/infrastructure) and project teams to ensure continuous operation of all environments
Manage and maintain tools to automate operational processes
Work to continuously improve the speed, efficiency, and scalability of our systems and environments
Work directly with Agile Development teams to provide support aligned with a model of CI/CD
Build and maintain appropriate log gathering, system monitoring, and reporting infrastructures
What You’ll Need to Succeed
We’re looking for someone who:
Can work remotely or from an Applied Systems office
Your experience should include some or all of the following:
5+ years of DevOps or development experience for Linux or Windows distributed systems delivery using a large-scale private cloud environment
Experience automating CI/CD processes
Experience with Infrastructure as Code, especially Terraform
Experience with Kubernetes and Helm for container orchestration
Demonstrable proficiency in writing automation scripts for the public cloud
Bachelor’s degree in Computer Science or related field or equivalent work experience
We proudly support and encourage people with military experience, as well as military spouses, to apply
Additionally, you may have:
Open source or commercial tooling CI/CD experience, GitLab experience is a plus
Public Cloud experience with a major provider, Google Cloud Platform experience is a plus
CI/CD processes with GitHub, GitLab, Team Foundation Services, or similar
Experience with monitoring and visibility tools. Datadog experience is a plus.
Database and data storage understanding
What You’ll Gain
Benefits from Day One
Health insurance plans, dental, and vision
Wellness incentives
401(k) and/or RRSP retirement savings plans with employer match
Work-Life Balance
Competitive paid vacation time and a free day for your birthday
Personal/sick time
Paid holidays
Flex Time
Paid parental leave (U.S. candidates)
Volunteer time off
Empowering Career Growth and Success – We invest in talent, care about our people and are empowered by the results of our work. We grow our teams from within and give our employees opportunities to advance.
What We Value
We strive for excellence at every turn to be the best at what we do. We invest in talent, care about our people and are empowered by the results of our work. We fulfil the promise of insurance – safeguarding and protecting what matters most in people’s lives. And there is no more important job than that.
Our focus on the workforce, workplace and marketplace gives every qualified individual an environment in which they can be productive while we maintain our position in the industry. To help drive that change toward a vibrant, modern workplace, we have employee-driven networks with commonalities in ethnicity, gender, sexual orientation and military status.
Who We Are
For more than 35 years, Applied Systems has created innovative technology for the global insurance industry. Today, we are a rapidly growing software leader that is revolutionizing the way agencies and brokerages succeed.
We are smart and curious people in a tech-first environment that champions bold and powerful thinking. We are transforming a complex industry through digitization, automation, and innovative new partnerships. Together we are driving the industry fearlessly forward.
AWS DevOps Technical Lead
Luxoft
Plano, TX
Benefits
Relocation assistance
Project Description
Luxoft Automotive supports its customers in creating strong solutions with up-to-date technologies that match their exact demands, based on a valuable, professional and trustful relationship. In your role as AWS DevOps Technical Lead, you will develop platforms for big data and data science on AWS.
Responsibilities
Monitoring and ensuring the integrity of data pipelines
Administering the deployment, management, and monitoring of applications deployed on AWS via CI/CD and/or containers
Ensuring the compliance of data science operations on AWS
Monitoring usage and cost, and implementing optimizations across a variety of AWS resources
Maintaining the Jenkins pipeline and performing code promotions through change management
Provisioning DynamoDB tables with encryption and granting access using IAM policies
Deploying and managing AWS serverless applications running on API Gateway and Lambda
Deploying Redshift clusters into a VPC with encryption, enabling cross-region snapshots, configuring subnet groups, setting up monitoring, and resizing the cluster using elastic and classic methods
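As a rough illustration of the DynamoDB responsibility above, the request parameters for an encrypted table and a matching IAM policy can be assembled as plain dicts and then passed to boto3 (e.g. `boto3.client("dynamodb").create_table(**params)`). The table name, key, and account ARN below are made-up placeholders, not from the posting:

```python
import json

def encrypted_table_params(table_name, hash_key):
    """Build DynamoDB CreateTable parameters with server-side
    encryption (KMS) enabled, in the shape boto3 expects."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": hash_key, "AttributeType": "S"},
        ],
        "KeySchema": [{"AttributeName": hash_key, "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST",
        "SSESpecification": {"Enabled": True, "SSEType": "KMS"},
    }

def table_access_policy(table_arn):
    """Minimal IAM policy document granting read/write on one table."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
            "Resource": table_arn,
        }],
    }

params = encrypted_table_params("telemetry", "device_id")
policy = table_access_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/telemetry")
print(json.dumps(policy, indent=2))
```

In a real pipeline these definitions would more likely live in Terraform or CloudFormation (both named elsewhere in this posting) so the encryption and access rules are versioned alongside the rest of the infrastructure.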
Skills
Must have
Required Skills:
AWS (experience mandatory): S3, IAM, EC2, Route53, SNS, SQS, ELB, CloudWatch, Lambda and VPC
Automation (experience mandatory): Terraform.
DevOps (mandatory): Jenkins, Bitbucket, Python/Shell scripting
Experience:
Extensive experience in designing, configuring, deploying, managing and automating AWS Core Services like S3, IAM, EC2, Route53, SNS, SQS, ELB, CloudWatch, Lambda and VPC.
Experience in automating cloud deployments using Terraform.
Experience in DevOps.
Nice to have
Big data (knowledge): AWS data platform services such as Redshift, DynamoDB, Databricks, Glue, MLOps and Athena.
Languages
English: C2 Proficient
Seniority
Lead
Relocation package
If needed, we can help you with the relocation process.
Vacancy Specialization
BigData Development
Ref Number
VR-96943
DevOps Engineer- US
Spiff
Utah
Remote
$120,000 - $170,000 a year - Full-time
Job details
Salary
$120,000 - $170,000 a year
Job Type
Full-time
Benefits
401(k)
Dental insurance
Disability insurance
Flexible schedule
Health insurance
Health savings account
Parental leave
Vision insurance
Full Job Description
Spiff (https://spiff.com), recently named one of the most innovative fintech companies, is on a mission to inspire, enable, and reward peak business performance. Why? Commission plans are used by modern companies to reward and drive good behavior using more advanced rules or combinations of rules such as quota attainment, accelerators, and other types of variable earnings. Great commission plans motivate salespeople to sell more to the right companies. To help companies and reps reach their full potential, we take the manual labor and complexity out of current commission processes and completely automate them. Finance teams used to spend hours each month trying to prepare commissions, communicate them to their reps, deal with discrepancies, and then get those paid on time. Spiff automates that full process. We connect to clients' systems (CRM, ERP & payroll) to reduce the work and the number of errors. Spiff gives powerful, real-time data and insights about commissions to reps, managers, and executives.
Who we are looking for...
We’re hiring a DevOps Engineer to join our team and help build and deploy first-class, secure, automated infrastructure at scale. We are looking for a highly technical, self-motivated engineer with experience in infrastructure as code using Terraform and cloud computing environments. Your responsibilities will include maintaining and improving automated infrastructure environments on Kubernetes using IaC best practices, assisting with framework compliance (SOC, ISO, etc.), and ensuring engineering teams have the infrastructure tooling to be effective.
What experience you’ll bring to Spiff…
Minimum of 3 years of experience in DevOps.
Experience in the following:
Google Cloud Platform (GCP)
Kubernetes, GKE
Infrastructure as Code (IaC)
Terraform (Terragrunt is a bonus)
Helm
Service-oriented architecture
Application-security best practices
Docker and Containerization
CI/CD pipelines (GitHub Actions is a bonus)
Bash and Ruby scripting
Linux based environments
Bonus Points
Experience with Ruby, Elixir, and/or React
Interest in Startups/Tech/Finance. Our team loves the startup community, and a genuine interest in the space is huge.
Compensation
At Spiff Inc. we are committed to equal pay and opportunities. In order to provide full transparency, the salary range for this position is USD 120,000 - 170,000 per annum for all candidates based in the US. This role is eligible for equity.
If you are located outside of the US and would like to have visibility on the salary range valid in your location, it can be disclosed by your recruiter upon request.
Spiff Inc. will consider internal equity, external market information, and each candidate's prior experience, education, location, skills, and aptitudes for the role they are applying for.
What types of perks and benefits we offer…
Competitive Salary and Equity
Comprehensive medical, dental, and vision coverage for you and your dependents
Up to $1,200 a year towards your Health Savings Account
401(k)
Company sponsored Short Term and Long Term Disability Insurance
Company-sponsored access to online counseling
Flexible Time Off
Flexible work hours
Parental leave
HQ in Salt Lake City (enjoy biking and skiing when you come to visit!)
Remote Friendly Company
DevOps Engineer
Sublime Wireless Inc
Remote
Contract
Qualifications
Kubernetes: 4 years (Preferred)
Docker: 3 years (Preferred)
Full Job Description
Note: Only candidates who can work on W2 can apply.
Kubernetes deployments, upgrades, and backups, for self-managed distributions like MicroK8s, k3s, and others.
Experience with the public cloud distributions of the leading hyperscalers (EKS, AKS, GKE) is an advantage.
Developing CI and CD pipelines on k8s.
Docker and compatible frameworks – image development and builds, runtime operation
Prometheus/Grafana and similar monitoring tools.
Experience with Linux administration.
Experience with REST APIs.
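To make the Kubernetes deployment and upgrade duties above concrete, here is a minimal Deployment manifest of the kind such pipelines apply; the names and image are placeholders, not anything from this posting:

```yaml
# deployment.yaml -- hypothetical example; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.0.0
          ports:
            - containerPort: 8080
```

An upgrade is then typically a matter of bumping the image tag and re-applying the manifest (`kubectl apply -f deployment.yaml`), then watching `kubectl rollout status deployment/example-api`.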
Sr. DevOps Engineer
MTBC
Miami, FL
Remote
$120,000 - $140,000 a year - Full-time
Full Job Description
Job Summary: A proactive person with years of experience in AWS, specifically EKS and containerized applications, to provide daily support to different internal teams on cloud infrastructure using a variety of AWS services, with a desire to solve challenging problems. This person will participate in a migration project to EKS, using IaC tools such as Terraform and CI/CD, to create and maintain applications and services with automated build and deployment pipelines.
Essential Duties and Responsibilities:
Work independently and collaboratively with a blended team of onshore and offshore DevOps Engineers and Software Engineers to ensure code releases go smoothly, analyzing data for improvements and optimization
Strive for continuous improvement and build continuous integration, continuous delivery pipelines (CI/CD Pipeline)
Learn and master new and emerging technologies and take initiative to offer technical direction and creative solutions for deployment and infrastructure problems
Bring out-of-the-box ideas to improve system performance and stability, and collaborate on infrastructure design decisions
Automate deployment, management, and health monitoring of infrastructure and applications
Required Knowledge, Skills and Abilities:
You have delivered and supported commercial, enterprise software.
You have extensive experience in deploying applications and services to AWS, using CI/CD technologies, and operational monitoring technologies
You have a good understanding of both development and operational support
You are goal-oriented, self-motivated, and able to be successful in a schedule-driven, fast-paced, dynamic environment.
You possess excellent written/verbal communication and presentation skills
You're a tinkerer at heart with an innate ability to solve tough production problems.
Tech Stack:
Kubernetes and Docker (AWS EKS preferred)
Terraform, Chef & Ansible configuration management
Bamboo, Jenkins, Azure DevOps, GitHub Actions
AWS managed databases and caching services (RDS, ElastiCache, others)
Passenger or NGINX webserver
New Relic
Grafana
CloudWatch
Kibana
Education and Experience:
A Bachelor of Science degree in computer science or similar.
7+ years in a Systems Administration or Infrastructure Automation role.
AWS DevOps Certification + Experience
You like to teach as well as be taught.
You have demonstrated the ability to resolve issues.
Can-do attitude and proactive approach to solving problems
Excellent communication skills (written, verbal, presentation)
DevOps Engineer
Tek Ninjas
Texas
Contract
Full Job Description
DevOps
Remote
Duration: 6+ months contract
Strong knowledge of and experience managing Databricks and ADF.
Strong knowledge of and experience managing APIM.
Microsoft Azure cloud management & administration, including environment provisioning, configuration, performance monitoring, policy governance, and security that is global in scale.
Design, develop, and implement highly available, multi-region solutions within Microsoft Azure.
Analyze existing operational standards, processes, and/or governance, present recommendations to modernize or improve, and then execute on said improvement.
Migrate existing infrastructure services to cloud-based solutions.
Manage security and access controls of cloud-based solutions.
Develop infrastructure as code (IaC) leveraging cloud native tooling to ensure automated and consistent platform deployments.
Develop & implement policy driven data protection best practices to ensure cloud solutions are protected from data loss.
Support cloud adoption of applications as they are being transformed and/or modernized.
Ensuring all infrastructure components meet proper performance and capacity standards.
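As a sketch of the Azure IaC and data-protection duties listed above, a small Bicep fragment might look like the following; the resource name and settings are hypothetical examples only:

```bicep
// main.bicep -- hypothetical sketch of cloud-native IaC on Azure
param location string = resourceGroup().location
param accountName string = 'examplestorage001'

// A storage account with baseline protection settings enabled,
// illustrating policy-driven configuration expressed in code.
resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: accountName
  location: location
  sku: {
    name: 'Standard_GRS' // geo-redundant to guard against data loss
  }
  kind: 'StorageV2'
  properties: {
    minimumTlsVersion: 'TLS1_2'
    allowBlobPublicAccess: false
  }
}
```

Deployments of a template like this would normally be run from a pipeline (e.g. `az deployment group create`) so environments stay automated and consistent.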
Spectrum
Charlotte, NC 28273
Full-time
JOB SUMMARY
The DevOps Engineer has a passion for deploying software solutions that use the most current technologies and improve the customer experience. The role is responsible for both divisional and national product deployments, and supports server software installations performed by development, test, and deployment teams. It also involves production support tasks, including troubleshooting system and data issues for both divisional and national systems, and serves as the first line of contact for production issues.
MAJOR DUTIES AND RESPONSIBILITIES
Actively and consistently supports all efforts to simplify and enhance the customer experience.
Work with developers, testers, and deployment teams to create software deployment plans.
Write and update automated scripts for installation of server software products.
Configure necessary hardware/virtual machines.
Deploy code in cloud environments.
Work with developers and hardware teams to update infrastructure and OS for applications.
Perform lab installations and upgrades of server software products.
Work with developers and infrastructure teams to install or upgrade third party software.
Monitor systems performance, reliability, and daily data processing.
Participate in project-related stand-up meetings.
Review and provide feedback for all external facing user documentation, including on-line help. Provide documentation support as necessary.
REQUIRED QUALIFICATIONS
Required Skills/Abilities and Knowledge
Ability to read, write, speak and understand English
Extensive experience packaging and delivering software to a production environment.
Well-versed in automating software deployments using tools such as Puppet, Chef, Python, and Ansible.
Familiar with technology (inputs, outputs, and processing flows), and ability to clearly communicate that knowledge. Ability to problem solve, identifying and resolving complex issues as part of a team.
Experience with software source control tools (Perforce, GitHub).
Experience with Linux shell scripting environments like bash.
Experience in AWS, Containers.
Ability to write clear technical documentation for use by developers and testers.
Ability to work under limited direction, and handle multiple assignments simultaneously.
Demonstrated verbal and written communication skills.
Thorough understanding of the Agile Software Development Lifecycle (SDLC).
Demonstrated in-depth leadership with ability to facilitate team consensus, and interact with both leadership and implementation teams.
Required Education
Bachelor’s degree in a technical field, or equivalent work experience
Required Related Work Experience and Number of Years
Dynamic scripting languages - 3
Deploying software - 3
Linux or other Unix systems - 3
Senior DevOps Engineer
Maximus
Remote
Up to $150,000 a year - Full-time
Full Job Description
Job Description Summary:
We are seeking a Sr. DevOps Engineer to join our team supporting our Internal Revenue Service (IRS) client to help improve the IRS’ Technology Infrastructure services. The engineer will provide support to infrastructure and application teams by ensuring that the systems and production environment are optimized in accordance with organization standards while meeting requirements. The candidate must have knowledge of and experience using the agile methodology, with an emphasis on automation, continuous integration, and continuous delivery.
The Sr. DevOps Engineer will work closely with the enterprise configuration management team to build and maintain CI/CD pipelines; design, deploy, and maintain virtual desktop infrastructure solutions; and review existing technology implementations to suggest and implement improvements and upgrades.
Location of work is remote in US but candidates ideally will be within driving distance to IRS Federal Buildings in Austin, TX, Farmers Branch, TX or Lanham, MD. There may be meetings that require in person attendance occasionally.
Position is contingent on funding.
Provide technical thought leadership based on the DevOps Handbook, collaboratively working with a team (as an engineer) that maintains an enterprise Jenkins pipeline
Integrate DevOps technical components with enterprise IRS initiatives like the Enterprise Container Platform (platform as a service) and the Infrastructure as Code initiative (IaC, where vRealize is one of the targeted tools)
Write scripts and small utilities, automate deployments, and evaluate pipelines against industry and in-house standards
Follow and automate the steps involved with onboarding projects, document the tasks, and implement needed enhancements; enforce and develop standards related to trunk-based development and the definitions of done
Collaboratively work with the existing team leveraging agile methodology (e.g., participate in daily stand-ups and other scrum ceremonies)
Conduct tool evaluations on DevOps software components and help integrate them with the CI/CD pipeline
Support team members on infrastructure maintenance, to include upgrades, plugin installations, security remediation activities, migrations, and IRS ticketing needs in support of any of those efforts
Provide hands-on assistance to an assigned group of application development project teams in support of implementation of the CI/CD pipeline. This includes help with planning, implementation, troubleshooting, metrics, and conducting final retrospectives. Train the project teams so they can independently maintain their pipeline(s).
Maintain the currently developed pipeline for tier I and tier II software applications. All AD projects that join CI/CD will be onboarded in Sandbox and promoted through Production untouched by human hands, which constitutes successful implementation of DevOps tools and methodology.
Educate IRS ECP stakeholders on best practices for container adoption in areas of security, processes/procedures, maintenance, etc.
Support development of the DevOps Automated Testing Strategy document and identification and onboarding of projects for automated regression testing
Hands-on experience with some of these tools: Jenkins, Sonar, Maven, AppScan, JaCoCo, Bash/C/shell scripting, Python, Ruby, Node.js, Ansible Tower, NexusPro, NexusIQ, Git/GitHub/GitLab, ServiceNow
Good understanding of or experience with the OpenShift platform
Assist with capturing metrics for CI/CD and DevOps initiatives
Perform knowledge transfer by partnering with IRS employees to share technical expertise, and knowledge acquired through supporting the technical discussions, processes, and delivery of the CI/CD pipeline and all integration points
Propose and develop DevOps CI/CD pipeline orchestration solutions using Jenkins and Ruby Scripting
Develop and propose containerization solutions compatible with IRS DevOps Practice
Develop and propose IaC and PaaS solutions compatible with IRS DevOps Practice
Evaluate software tools for orchestration, IaC, PaaS, and containerization solutions
Troubleshoot orchestration, IaC, PaaS, containerization, CI/CD, and automated testing solutions
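The pipeline maintenance and orchestration duties above typically revolve around a Jenkinsfile. Here is a minimal declarative sketch using the Jenkins, Sonar, Maven, and Ansible tooling this posting names; the specific commands and playbook are hypothetical, not the IRS pipeline itself:

```groovy
// Jenkinsfile -- hypothetical declarative pipeline sketch; stage contents are placeholders
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
        stage('Static Analysis') {
            steps {
                sh 'mvn -B sonar:sonar' // assumes a configured SonarQube server
            }
        }
        stage('Deploy to Sandbox') {
            steps {
                sh 'ansible-playbook deploy.yml -e env=sandbox'
            }
        }
    }
}
```

Onboarding a project to a shared pipeline like this is largely a matter of standardizing these stages so promotion from Sandbox through Production needs no manual steps.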
PROJECT QUALIFICATIONS
Bachelor's Degree from an accredited college or university required, an additional four (4) years of related work experience can substitute for a degree; preferred degree in Engineering or Information Technology related major
At least five (5) years working in an IT platform engineering environment or related field
Experience with source control tools (Git, SVN), Jenkins CI/CD, and Ruby
Experience with system configuration, management, and virtualization and VMware environments
Experience using automation technologies such as Ansible to automate server builds, installation, configuration, and change management
Scripting experience (Ansible, Chef, Puppet, shell, WLST, JBoss CLI, Bash/KSH)
Working experience with cloud technologies
Experience working with automation Tools like Jenkins, Chef
Container-as-a-Service (CaaS) and Platform-as-a-Service (PaaS) experience using OpenShift by Red Hat
Experience working with APIs
Knowledge of and experience with monitoring tools such as Splunk, Qualys, New Relic, and CloudWatch
Good Knowledge in the design and configuration of cloud services
Experience working with application servers such as Apache Tomcat, Node.js, or RedHat JBoss
Understanding of network topology and technologies such as firewalls, load-balancers, DNS, VPN, NAT
Strong communication skills and ability to work across organizations
In-depth understanding of the software development life cycle for implementing Enterprise level systems
Excellent attention to detail
Excellent verbal and written communication skills
Ability to work in a fast-paced, dynamic environment
Ability to multi-task and handle several high-priority, urgent activities simultaneously
Ability to interface with all levels of management
Ability to perform complex tasks with minimal supervision and guidance
Excellent time management, scheduling, and organizational skills
Ability to work well independently or in a team setting
Preferred: IRS system and data knowledge experience
Additional Requirements, as per contract/client:
Candidates must meet requirements to obtain and maintain an IRS Minimum Background Investigation (MBI) clearance (active IRS Moderate Risk MBI clearance is a plus)
Candidates must be a US Citizen or a Legal Permanent Resident (Green Card status) for 3 years, and be Federal Tax compliant
Job Summary:
Essential Duties and Responsibilities:
Develop and implement the configuration management system which supports the enterprise software development life cycle (SDLC).
Manage source code within the Version Control System (e.g., branching, syncing, merging); compile, assemble, and package software from source code.
Work with AEG to perform and validate installations/upgrades/deployment.
Participate in defining and providing guidance on standards/best practices.
Develop automation scripts for build, deployment, and versioning activities.
Research and resolve technical problems associated with version control and continuous integration systems.
Minimum Requirements:
Bachelor's degree in related field.
5-7 years of relevant professional experience required.
Equivalent combination of education and experience considered in lieu of degree.
5+ years’ experience with SVN administration.
5+ years J2EE application experience.
Skilled with scripting languages such as Ant, Jython, Bash, Groovy, etc.
DevOps / Containerization technology experience (Docker, Kubernetes, PCF).
Knowledge of Agile development and Continuous Delivery methodologies.
Experience with continuous integration environment utilities, preferably Jenkins.
AWS Certification required.
MAXIMUS Introduction: Since 1975, Maximus has operated under its founding mission of Helping Government Serve the People, enabling citizens around the globe to successfully engage with their governments at all levels and across a variety of health and human services programs. Maximus delivers innovative business process management and technology solutions that contribute to improved outcomes for citizens and higher levels of productivity, accuracy, accountability and efficiency of government-sponsored programs. With more than 30,000 employees worldwide, Maximus is a proud partner to government agencies in the United States, Australia, Canada, Saudi Arabia, Singapore and the United Kingdom. For more information, visit https://www.maximus.com.
EEO Statement: Active military service members, their spouses, and veteran candidates often embody the core competencies Maximus deems essential, and bring a resiliency and dependability that greatly enhances our workforce. We recognize your unique skills and experiences, and want to provide you with a career path that allows you to continue making a difference for our country. We’re proud of our connections to organizations dedicated to serving veterans and their families. If you are transitioning from military to civilian life, have prior service, are a retired veteran or a member of the National Guard or Reserves, or a spouse of an active military service member, we have challenging and rewarding career opportunities available for you. A committed and diverse workforce is our most important resource. Maximus is an Affirmative Action/Equal Opportunity Employer. Maximus provides equal employment opportunities to all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status or disabled status.
Pay Transparency: Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances. Posted Max: USD $150,000.00/Yr. Posted Min: USD $52,100.00/Yr.
Software Engineer -- Cloud Infrastructure / DevOps
Cradlepoint
Los Gatos, CA 95030
Remote
$103,950 - $186,300 a year - Full-time
Full Job Description
Overview:
This is a Remote Opportunity in Boise, ID or Los Gatos, CA
Cradlepoint – a part of Ericsson – was born in Boise and built for wireless. We are a team of authentic, hard-working, and innovative people driven by a shared vision to Connect Beyond the limits of wired networks. We help customers — big and small, across industries and around the world — utilize LTE and 5G cellular technology to connect people, places, and things, anywhere. We’re at the forefront of the Wireless WAN and 5G — the next big waves in networking — and we remain as hungry and humble as the day we started. If you’re hungry to be part of something big, come join us.
Responsibilities:
How Will You Contribute to the Company?
As part of the Cloud Infrastructure/DevOps team, you will work with an incredible group of experienced DevOps engineers who know how to build scalable, cloud-native infrastructure and software deployment pipelines using Kubernetes. You will be an integral part of a global team responsible for creating, managing, and scaling this public cloud infrastructure.
What Will You Do?
Design, develop, and automate scalable public cloud infrastructure for micro-service deployment
Help deploy and manage the global deployment of cloud infrastructure and applications, following the best security & compliance practices
Continuously monitor and improve our complex CI/CD pipeline
Automate Public Cloud service orchestration
Qualifications:
Minimum Qualifications:
BS/MS in Computer Science or related technical field
Two to five (2-5) years of experience in DevOps and Public Cloud Infrastructure
Experience with Infrastructure as Code technology and tools, preferably, with Terraform and Terragrunt
Background in building complex CI/CD pipelines using Git and Jenkins
Expertise in public cloud services and technologies, preferably AWS
Bonus Points:
Background in Kubernetes, Helm, and Containerization technologies
Note: Did you know that women and other marginalized groups often hold back on applying to jobs if they don’t meet 100% of all listed requirements? We don’t want you to hold back! If you don’t check every single box above but still feel like you could successfully do the work, we encourage you to apply! We’d love to connect and see how you could add to our team.
Why Cradlepoint?
At Cradlepoint, we celebrate & support the unique contributions of our vibrant, global employee base. We know that our differences of perspective inspire creativity and drive innovation. Our culture is based on a set of shared values designed to unite and enable our community to thrive.
At Cradlepoint, we are hungry & humble. Our values drive everything we do.
Respect: we seek to understand, value all perspectives and celebrate our differences.
Integrity: we take ownership and accountability and do the right thing - even when it’s hard.
Perseverance: we accept and embrace change and have a passion to win.
Professionalism: we build trust by delivering on our promises and working collaboratively to hold each other accountable.
Our focus areas define how we work:
Cooperation & Collaboration: we are one team.
Courageous, Fact-based Decisions: be a curious learner and ask questions.
Execute with Speed: empower employees and guide.
Speak-up environment: dare to disagree.
Empathy & Humanness: care for each other and support work life balance.
We are creating the future of global connectivity & building the new network for the new enterprise. Come join us. You belong here.
Compensation and Benefits at Cradlepoint
At Cradlepoint, we know that our people are the key to our success. We offer a competitive compensation and benefits package to help with your individual needs and goals.
Your Pay:
The salary range for this position is listed below. The actual salary offered is dependent on various factors including, but not limited to, location, the candidate’s combination of job-related knowledge, qualifications, skills, education, training, and experience.
$103,950 - $186,300 / year
Your pay also includes the opportunity for an annual bonus. This variable pay opportunity is dependent upon the attainment of agreed to goals and objectives as determined by Cradlepoint’s Senior Leadership team. Certain eligibility and pro-ration rules apply.
Your Health:
Cradlepoint offers excellent, competitive employee benefits, such as: subsidized, nationwide PPO medical benefit options including a low-deductible Point of Service Plan and a qualifying High Deductible Health Plan (HDHP), with a generous company-provided HSA contribution. For California employees, we offer a subsidized HMO option through Kaiser. Cradlepoint also offers subsidized dental and vision coverage.
Your Financial Security:
We invest in both your short and long-term financial wellbeing. Cradlepoint’s 401(k) plan has a 4% company match and immediate vesting. Employees will also receive company-paid employee basic life and AD&D insurance and company-paid disability benefits.
Your Time:
Your work-life balance is important to us. Cradlepoint provides generous paid time off, including: 15 days of Flexible Time Off (FTO), four paid quarterly well-being days, and 11 paid annual holidays (includes nine company holidays and up to two floating holidays). Please note that an employee’s FTO balance and floating holidays may be prorated in the first year, based on start date. Cradlepoint also offers paid maternity-leave benefits and six weeks 100% paid family leave for all employees.
Additional Benefits:
Cradlepoint offers other company-paid benefits such as a comprehensive Employee Assistance Program, a free Headspace membership, LinkedIn Learning access, Talkspace mobile therapy, and volunteer paid time off.
#LI-Remote
#LI-TS1
Cradlepoint’s Diversity, Equity, Inclusion, and Belonging mission is to create an inclusive work environment where all employees’ differences are celebrated, their thoughts matter, and everyone feels safe to bring their authentic selves to work. We’re proud to be an equal opportunity employer and aim to attract, develop, and engage top talent from a diverse candidate pool. It is our policy and commitment to provide equal opportunity employment for all persons and not discriminate in employment decisions by placing the most qualified person in each job, without regard to any other classification protected by federal, state, or local law.
Sr. DevOps Engineer
TechMatrix Inc
Plano, TX 75093
Contract
Full Job Description
DevOps Engineer
Duration: 6 month contract
Location: Plano, Texas (2-3 days in office)
They are essentially automating everything they have and moving it to the cloud using Terraform and AWS.
5+ years’ experience as a DevOps Engineer (Experience with CI/CD pipeline and various DevOps tools)
Prior experience working on Cloud Adoption projects, specifically with AWS
Prior experience with Amazon S3 storage is very nice to have
Need to have cloud migration experience with Terraform
Able to maintain and modify existing Terraform modules
Experience with Python or any shell scripts
Experience with Ansible
Willingness to learn/work with Splunk
Project Scope / role requirements:
This role will be part of a public cloud adoption acceleration program – AWS
Role is specifically focused on S3 AWS storage
Client already has a Terraform environment in place
Candidate will be required to maintain existing Terraform modules
Key skills:
Terraform- Module development and maintenance
AWS / S3
CI/CD
Jenkins
Git
BitBucket
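Given that the role centers on developing and maintaining Terraform modules for S3, a minimal module sketch may be useful context; every name here is a placeholder, not the client's actual code:

```hcl
# modules/s3-bucket/main.tf -- hypothetical module sketch; all names are placeholders
variable "bucket_name" {
  type = string
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
}

# Block all public access -- a common baseline for S3 modules.
resource "aws_s3_bucket_public_access_block" "this" {
  bucket                  = aws_s3_bucket.this.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

output "bucket_arn" {
  value = aws_s3_bucket.this.arn
}
```

Callers would instantiate the module with a `module` block and pass `bucket_name`, keeping bucket policy defaults consistent across the environment.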
Job Type: Contract
Pay: $80.00 - $85.00 per hour
Schedule:
8 hour shift
Ability to commute/relocate:
Plano, TX 75093: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person