Quality Assurance Tech
Job Reference: 20-06675
Job Description: SERVICES TO BE PERFORMED
* Manage and maintain pipeline environments, handling policy-engine violations and vulnerabilities for Apollo environments
* Develop tests using a Selenium-based Ruby test framework; document and coordinate the deployment of manual and automated tests; and test the mobile-device web view and MShop.
* Design, develop, and execute quality assurance test cases
* Improve test coverage by reviewing and filling gaps in existing automation.
* Work closely with engineers, business owners, and customer service professionals throughout the development and testing lifecycle, including requirements gathering and design.
* Use functional specifications to plan, create, or sign-off on test plans, test cases, and automated test cases.
* Provide regular, detailed status reports on project progress, including test cases executed, bugs discovered, and bugs fixed.
* Experience in operations management to handle code deployments through pipelines
* Strong understanding of software testing, including backend API testing as well as frontend UI testing
* Strong knowledge of QA methodology and tools.
* Ability to deal with ambiguity in a fast-paced environment where every change affects the experience of millions of customers
* Represent the customer: understand how they use the system and include the most relevant end-to-end user scenarios in test plans and automation.
Job Reference: 20-05907
Title: Software Developer/Client Engineer
We are looking for a skilled and experienced Software Development Engineer to join our Client & NLP team. The work may be performed full-time onsite in San Jose, CA, depending on the shelter-in-place order, but most likely will be performed remotely. The candidate must work well in cross-functional teams of engineers, product managers, and applied researchers.
Build a gateway microservice for the *** Client platform. The gateway service resolves policy-based resource starvation for backend Client microservices and supports API splitter, API aggregation, and API A/B testing patterns using industry best practices.
Collaborate with Client engineers and other senior architects on Client model onboarding, deployment, testability, and availability for *** internal clients.
Write, automate, and execute performance and load test scripts to measure the latency and throughput of Client backend microservices at the edge gateway, with and without caching.
Help analyze and troubleshoot performance bottlenecks at the edge gateway and in Client backend microservices.
Create client integration scripts and sample code for internal clients to integrate their services with the Client platform via the new gateway.
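One of the gateway patterns above, API A/B testing, is commonly implemented as deterministic hash-based traffic splitting. The sketch below is a minimal illustration, not the actual *** implementation; the class name AbRouter and the 10% split are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

/**
 * Minimal sketch of deterministic A/B traffic splitting at an API gateway.
 * All names and the split percentage are illustrative, not from the posting.
 */
public class AbRouter {
    static final int VARIANT_B_PERCENT = 10; // send ~10% of traffic to variant B

    /** Hash the client id so the same caller always lands in the same bucket (0-99). */
    static int bucket(String clientId) {
        CRC32 crc = new CRC32();
        crc.update(clientId.getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % 100);
    }

    /** Route a request to backend variant "A" or "B" based on its bucket. */
    static String route(String clientId) {
        return bucket(clientId) < VARIANT_B_PERCENT ? "B" : "A";
    }

    public static void main(String[] args) {
        System.out.println("client-42 -> variant " + route("client-42"));
    }
}
```

Hashing the client id rather than picking randomly keeps each caller pinned to one variant across requests, which keeps experiment results consistent.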
Experience with programming in Java and Python.
Hands-on experience with Docker and Kubernetes is a must.
Experience as a performance test engineer: writing, automating, and executing performance scripts.
3-5 years of experience in RESTful APIs and distributed systems using Spring framework.
Experience with CI/CD.
Experience writing test automation suites using JUnit and Mockito.
Proficiency with Jira, wikis, Git, GitHub, Postman, and SoapUI.
Proficiency in JSON, SQL, and regular expressions.
Familiarity with agile software development lifecycle.
Excellent written and verbal communication skills.
Experience working with multiple stakeholders to produce and review documentation.
Experience with API documentation using Swagger and familiarity with the OpenAPI Specification.
Familiarity with API gateway cross-cutting concerns such as security, authorization, and rate limiting***
Familiarity with the Software Development Life Cycle (SDLC).
Bachelor of Science degree, preferably in a technical field.
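The rate-limiting concern listed in the requirements above is often handled at the gateway with a token bucket. The following is a minimal sketch in plain Java, not a production implementation; the TokenBucket name and its parameters are hypothetical:

```java
/**
 * Minimal token-bucket sketch of gateway rate limiting.
 * Class and parameter names are illustrative, not from the posting.
 */
public class TokenBucket {
    private final long capacity;       // max tokens the bucket holds (burst size)
    private final double refillPerSec; // tokens added per second (steady rate)
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(long capacity, double refillPerSec) {
        this.capacity = capacity;
        this.refillPerSec = refillPerSec;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    /** Returns true if the request may proceed, false if it should be throttled (e.g. HTTP 429). */
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        double elapsedSec = (now - lastRefillNanos) / 1e9;
        tokens = Math.min(capacity, tokens + elapsedSec * refillPerSec);
        lastRefillNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket limiter = new TokenBucket(2, 1.0); // burst of 2, then 1 request/sec
        System.out.println(limiter.tryAcquire()); // true
        System.out.println(limiter.tryAcquire()); // true
        System.out.println(limiter.tryAcquire()); // false: bucket drained
    }
}
```

The bucket allows short bursts up to its capacity while enforcing a steady long-run rate, which is why this pattern is a common default for gateway throttling.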
Software Dev Engineer
Job Reference: 20-03037
Duration: 0-18 month(s)
Description/Comment: About Big Data Platforms
The Big Data Platforms team powers the most demanding Big Data applications in the industry on some of the largest Hadoop clusters ever built. Yahoo pioneered this level of scale with Hadoop, and *** continues to be a leader in this space. Our team includes several PMC (Project Management Committee) members and Committers in key Apache open source projects like Hadoop, Storm, and Tez, to name a few. Our leadership position keeps *** at the forefront of these projects, both directionally and technically. Our team structure encourages trust, learning from one another, having fun, and attracting people who are passionate about what they do. If you want to work with Hadoop, Storm, or Spark and get a deep understanding of cloud computing, we're the team for you.
We are looking for an experienced Java developer to work on a critical tool sitting at the heart of our grid infrastructure. The team maintains, operates, and develops our Apache NiFi deployment for the Big Data organization.
Even more significant than having used this specific tool is your desire and willingness to work with this technology on one of the core teams within the organization.
We are a fun group that's passionate about technology. We are committed to providing outstanding, timely customer support, so you'll get to know lots of folks from across the organization who happen to be leaders in the Open Source and Big Data communities.
We work well with one another and could use another forward-thinking mind to help expand our team!
Knowledge of programming concepts, software architecture, networking and distributed systems, and UNIX/Linux environments
Demonstrable experience as an object-oriented developer using popular backend programming languages (Java, C++, C#)
A passion for elegant code
Outstanding interpersonal and communication skills
The following skills and experience are considered a plus:
Experience working within an enterprise level environment
Experience with one or more of the following: Hadoop Core, Pig, Hive, HCat, Oozie, HBase, Spark, or Storm
Previous NiFi experience would be incredible!