. Design, implement, and maintain all AWS infrastructure and application services for an enterprise-class application performance monitoring and workflow automation product within an AWS-based managed service environment
. Design and implement availability, scalability, and performance plans for the AWS managed service environment
. Support multiple agile product development teams
. Continually re-evaluate the existing stack and infrastructure to maintain optimal performance, availability, and security
. Implement process and quality improvements through task automation
. Institute infrastructure as code, security automation, and automation of routine maintenance tasks (see the sketch after this list)
. Perform data migration from on-premises environments into AWS, where needed
. Support the business development lifecycle (Business Development, Capture, Solution Architecture, Pricing and Proposal Development)
. Automate deployment, monitoring, management, and incident response for the product
. Build and scale the technology infrastructure to meet rapidly increasing demand
. Collaborate with Development and QA to bring new features and services into production
. Develop and improve operational practices and procedures
. Act as an AWS cloud expert and support pre-sales through scoping, solution review, SOW review, and assistance with the end-to-end sales cycle
. Work on client projects including cloud migrations and deployments, automation, cloud optimization, Well-Architected reviews, security, and best practices
. Work on major client projects, supporting tasks related to cloud infrastructure, DevOps, and automation
. Work internally with our product team to provide feedback and ideas on Datavail's cloud monitoring tool, which will be used by our customers
. Provide thought leadership by speaking at technical conferences, webinars, and marketing events, and by publishing case studies, white papers, and blogs
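
As a concrete illustration of the routine-maintenance automation called out above, here is a minimal Python sketch that prunes EBS snapshots older than a retention window. It assumes boto3 credentials are already configured; the 30-day retention, the region, and the choice to delete rather than archive are all illustrative, not part of the posting.

    # Minimal sketch: prune EBS snapshots older than a retention window.
    # Assumes AWS credentials are configured; retention and region are
    # placeholders, not policy from the posting.
    from datetime import datetime, timedelta, timezone

    import boto3

    RETENTION_DAYS = 30  # hypothetical retention policy

    def prune_old_snapshots(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
        # OwnerIds=["self"] limits the scan to snapshots owned by this account.
        paginator = ec2.get_paginator("describe_snapshots")
        for page in paginator.paginate(OwnerIds=["self"]):
            for snap in page["Snapshots"]:
                if snap["StartTime"] < cutoff:
                    print(f"Deleting {snap['SnapshotId']} from {snap['StartTime']}")
                    ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])

    if __name__ == "__main__":
        prune_old_snapshots()

In practice a script like this would run on a schedule (for example, via EventBridge and Lambda) and honor tag-based exclusions before deleting anything.
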
. Hadoop developer with a minimum of 4 years' experience in Scala, Spark, and Java (mandatory)
. Should have exposure to Hadoop framework components like Sqoop, Hive, Presto, Spark, HBase, HDFS.
. Fine-tune Hadoop applications for high performance and throughput; troubleshoot and debug Hadoop ecosystem runtime issues
. Hands-on experience configuring and using Hadoop ecosystem components such as Hadoop MapReduce, HDFS, HBase, Hive, Sqoop, Spark, Pig, Oozie, ZooKeeper, and Flume
. Strong Scala programming skills for working with Spark (a minimal Spark example follows this list)
. Good experience with Apache Hadoop MapReduce programming, Pig scripting, distributed applications, and HDFS
. In-depth understanding of Data Structure and Algorithms.
. Experience in managing and reviewing Hadoop log files
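
The posting emphasizes Scala for Spark; to keep this document's examples in a single language, the same kind of job is sketched below in PySpark. The HDFS paths and column names are hypothetical.

    # Minimal PySpark sketch: aggregate event counts from files on HDFS.
    # Paths and column names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("event-counts").getOrCreate()

    # Read newline-delimited JSON events from HDFS (placeholder path).
    events = spark.read.json("hdfs:///data/events/*.json")

    # Count events per type per day: the kind of aggregation a Hive or
    # Presto query might otherwise serve.
    daily_counts = (
        events
        .withColumn("day", F.to_date("timestamp"))
        .groupBy("day", "event_type")
        .count()
        .orderBy("day", "event_type")
    )

    daily_counts.write.mode("overwrite").parquet("hdfs:///data/event_counts")
    spark.stop()
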
. Minimum of 4 years' total working experience with machine learning and data-driven AI technologies.
. Proficiency in at least one major machine learning framework, such as TensorFlow or PyTorch.
. Self-motivated in learning new technologies and curious about new directions.
. Experience in developing reusable ML models and assets, working closely with the engineering team to ensure scalability and modernization, as the models move into production.
. Analyzing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
. Exploring and visualizing data to understand it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
. Verifying data quality and/or ensuring it via data cleaning
. Defining validation strategies
. Defining the preprocessing or feature engineering to be done on a given dataset
. Training models and tuning their hyperparameters (see the sketch after this list)
. Analyzing the errors of the model and designing strategies to overcome them
. Deploying models to production
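
Here is a minimal sketch of the train-and-tune step above, with cross-validation as the validation strategy. scikit-learn is used for compactness (the posting itself names TensorFlow/PyTorch); the dataset and parameter grid are purely illustrative.

    # Minimal sketch: model training with a cross-validated hyperparameter
    # search. Dataset and grid are illustrative.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Validation strategy: 5-fold cross-validation on the training split;
    # the held-out test split estimates final performance.
    param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10]}
    search = GridSearchCV(RandomForestClassifier(random_state=42),
                          param_grid, cv=5, scoring="f1")
    search.fit(X_train, y_train)

    print("best params:", search.best_params_)
    print("test score:", search.best_estimator_.score(X_test, y_test))
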
Roles & Responsibilities:
. Proficiency in building large-scale personalization & recommendation algorithms and knowledge of information retrieval & ranking algorithms
. Experience in deep learning and classical machine learning, including but not limited to CNN/RNN architectures, reinforcement learning, and graph neural networks.
. Experience with one or more deep learning packages, including but not limited to TensorFlow and PyTorch (a small PyTorch sketch follows this list)
. Proficiency in Python programming
. Solid understanding of relational databases, SQL, and NoSQL databases.
. Experience with unstructured data like images & text is a plus
. Strong interpersonal skills and the ability to naturally explain difficult technical topics to everyone from data scientists to engineers to business partners
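
As the simplest instance of the personalization and recommendation work described above, here is a minimal PyTorch sketch of a matrix-factorization recommender. The dimensions and the synthetic interaction data are illustrative; a production system would add regularization, negative sampling, and proper evaluation.

    # Minimal PyTorch sketch of a matrix-factorization recommender.
    # Sizes and the synthetic (user, item, rating) data are placeholders.
    import torch
    import torch.nn as nn

    N_USERS, N_ITEMS, DIM = 1000, 500, 32

    class MatrixFactorization(nn.Module):
        def __init__(self, n_users, n_items, dim):
            super().__init__()
            self.user_emb = nn.Embedding(n_users, dim)
            self.item_emb = nn.Embedding(n_items, dim)

        def forward(self, users, items):
            # Predicted affinity is the dot product of user and item embeddings.
            return (self.user_emb(users) * self.item_emb(items)).sum(dim=1)

    model = MatrixFactorization(N_USERS, N_ITEMS, DIM)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    # Synthetic interaction triples stand in for real data.
    users = torch.randint(0, N_USERS, (4096,))
    items = torch.randint(0, N_ITEMS, (4096,))
    ratings = torch.rand(4096) * 5

    for epoch in range(5):
        opt.zero_grad()
        loss = loss_fn(model(users, items), ratings)
        loss.backward()
        opt.step()
        print(f"epoch {epoch}: loss {loss.item():.3f}")
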
. Should have 10+ years of experience in project management (Key people management and client management).
. Manage multiple engagements with 50+ software engineers, and deliver to contractual commitments while driving customer engagement & satisfaction. Professional Services Experience is preferred.
. Manage operational parameters like revenue recognition, resource sourcing and utilization, attrition, gross margin, appraisal cycles etc. and manage business growth effectively.
. Expertise in SDLC / PDLC using Agile and Waterfall methodologies with a broad appreciation of Architecture, Design, Infrastructure, Information Security & DevOps.
. Expertise in applying industry best practices to project / program delivery, including leveraging innovation and Centers of Excellence.
. Continuously improve total business value and strategic alignment of global delivery programs
. Ability to develop and present breakthrough business cases & proposals for changes / improvements to programs, including reviews with senior stakeholders.
. Hands on expertise in converting insights into business cases & proposals.
. Highly adaptable, a driver of change, and capable of quickly rallying teams / peer groups
. Implement highly secure public cloud infrastructure and services following repeatable and sustainable processes and ensure the availability, performance, and security of all cloud systems.
. Partner and collaborate with the software development and data warehouse teams to support customer systems, provide architectural guidance, select appropriate cloud technologies, and minimize delivery time.
. Provide data integrity through appropriate backup and disaster recovery solutions and support periodic testing and recovery exercises.
. Establish system monitoring, logging, and alerting; respond to, investigate, and address issues (a minimal alerting sketch follows this list).
. Help define and document cloud best practices and assist with financial accounting and cost optimization.
. Be available on weekends once or twice each month to perform maintenance activities; comp off provided for any weekend time worked.
. 4-8 years' experience administering public cloud and/or IT infrastructure with at least 3 years supporting AWS-based solutions.
. In-depth expertise in using and managing AWS Compute (EC2, EKS/ECS, Lambda), Database (RDS, Redshift), Networking (VPC, Firewall, ELB, Transit Gateway), and Storage (S3, EBS/EFS) offerings.
. Experience with native AWS management and security tools, such as Control Tower, CloudWatch/CloudTrail, KMS/Secrets Manager, and GuardDuty, and proficiency with AWS Identity and Access Management (IAM).
. Familiar with DevSecOps, automation, and infrastructure-as-code approaches using technologies such as AWS DevOps, Systems Manager/CLI, Service Catalog, and CloudFormation.
. Knowledge of at least one scripting/programming language required; Python preferred.
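
To illustrate the monitoring-and-alerting responsibility above, here is a minimal boto3 sketch that creates a CloudWatch alarm on EC2 CPU utilization and notifies an SNS topic. The instance ID, topic ARN, region, and thresholds are placeholders.

    # Minimal sketch: CloudWatch alarm on EC2 CPU, notifying an SNS topic.
    # All identifiers and thresholds below are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-example",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,            # 5-minute periods
        EvaluationPeriods=3,   # alarm only after 15 minutes of breach
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )

In a real environment this definition would live in infrastructure-as-code (for example, CloudFormation) rather than an ad hoc script, per the bullet above.
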
. Configuration management experience with tools such as Ansible
. Strong knowledge of Docker and Kubernetes
. Strong knowledge of WebLogic, including cluster maintenance and server administration
. Strong knowledge of Bitbucket
. Strong knowledge of Jira and automation using CI/CD pipelines.
. Strong knowledge of artifact repositories such as JFrog Artifactory or Nexus
. A passion for automation, building tools in Python, Java, or Bash (see the Python sketch after this list)
. Experience deploying and managing CI/CD pipelines.
. Strong experience managing distributed computing systems, e.g., NoSQL (Cassandra) and Hadoop
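
As an example of the small automation tooling named above, here is a minimal Python sketch that lists Docker containers and restarts any that are not running. It shells out to the `docker` CLI; the restart-everything policy is illustrative, and a real tool would whitelist containers and log rather than print.

    # Minimal sketch: flag and restart stopped Docker containers.
    # Relies on the `docker ps` / `docker restart` CLI; the blanket
    # restart policy is illustrative only.
    import subprocess

    def container_states():
        # `docker ps -a --format` prints one "name state" pair per line.
        out = subprocess.run(
            ["docker", "ps", "-a", "--format", "{{.Names}} {{.State}}"],
            capture_output=True, text=True, check=True,
        ).stdout
        return dict(line.split(maxsplit=1) for line in out.splitlines())

    def restart_stopped(states):
        for name, state in states.items():
            if state != "running":
                print(f"{name} is {state}; restarting")
                subprocess.run(["docker", "restart", name], check=True)

    if __name__ == "__main__":
        restart_stopped(container_states())
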
Salesforce Developer
Experience: 3-5 years
Location: Remote
. Minimum of 3 years' experience in Salesforce configurations and customizations.
. Should have Salesforce.com experience with Sales Cloud, Service Cloud, Apex, Visualforce, and Lightning
. Experience with Apex classes, triggers, Visualforce pages, Lightning (Aura/LWC), batch classes, schedulers, REST/SOAP web services, SOQL, roles, profiles, sharing settings, permission sets, custom objects, fields, and workflows
. Experience working with Lightning components and features
. Experienced in designing and implementing real-time and batch integration between Salesforce and legacy systems using REST / SOAP (a minimal REST sketch follows this list).
. Experience working with CSS frameworks.
. Strong knowledge of HTML, JavaScript, jQuery, AngularJS, or other JavaScript frameworks
. Strong analytical and problem-solving skills; experience in both process and solution analysis.
. Must have Salesforce Platform Developer I/II certification (additional certifications are an added advantage)
. Communicate with project managers, product managers, administrators and other developers to design cohesive project strategies and ensure effective collaboration throughout all phases of development, testing and deployment.
. Exposure to Agile methodology.
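
Here is a minimal sketch of a REST integration with Salesforce using the simple-salesforce Python library. The library is not named in the posting (any REST client works), and the credentials, query, and record fields are placeholders.

    # Minimal sketch: Salesforce REST integration via simple-salesforce.
    # Credentials and field values are placeholders.
    from simple_salesforce import Salesforce

    sf = Salesforce(
        username="user@example.com",
        password="***",
        security_token="***",
    )

    # SOQL query over the REST API.
    accounts = sf.query("SELECT Id, Name FROM Account LIMIT 5")
    for record in accounts["records"]:
        print(record["Id"], record["Name"])

    # Create a record; the same pattern covers update/delete for batch syncs.
    sf.Contact.create({"LastName": "Example", "Email": "jane@example.com"})
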
RPA Developer
Experience: 3-10 years
Location: Hyderabad/Chennai
. Minimum of 4 years' overall experience.
. 4+ years in UiPath RPA development
. Experience in RE Framework
. Experience in a minimum of 2 areas: Web, Excel, Outlook, PDF, Citrix, Mainframe automation, Salesforce
. Experience with Orchestrator and building attended and unattended bots (a hedged API sketch follows this list)
. Knowledge in RPA infrastructure and readiness to learn more in this area.
. Experience in version control and reporting KPIs
. Good experience in RPA Solution Delivery Lifecycle
. Experience in PDD, SDD, Runbook/Playbook creation
. Problem-solving skills
. Excellent written and verbal communication
. Attention to detail
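
UiPath bots themselves are built visually in Studio, but the Orchestrator work above is scriptable over its REST API. Below is a heavily hedged Python sketch of starting an unattended job: endpoints and auth flows vary by Orchestrator version (cloud tenants use OAuth), this follows the classic on-premises API, and the URL, credentials, and release key are all placeholders.

    # Hedged sketch: start an unattended UiPath job via the Orchestrator
    # REST API (classic on-prem endpoints; cloud tenants differ). All
    # values below are placeholders.
    import requests

    BASE = "https://orchestrator.example.com"

    # Authenticate to obtain a bearer token.
    auth = requests.post(f"{BASE}/api/Account/Authenticate", json={
        "tenancyName": "Default",
        "usernameOrEmailAddress": "admin",
        "password": "***",
    })
    token = auth.json()["result"]

    # Start one job for the given process release.
    resp = requests.post(
        f"{BASE}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
        headers={"Authorization": f"Bearer {token}"},
        json={"startInfo": {
            "ReleaseKey": "REL-KEY-PLACEHOLDER",
            "Strategy": "JobsCount",
            "JobsCount": 1,
        }},
    )
    resp.raise_for_status()
    print(resp.json())
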
Data Analytics Engineer
Experience: 3-10 years
Location: Hyderabad/Chennai
. Working in a challenging, fast-paced environment to create meaningful impact through your work
. Identify business problems & use data analysis to find answers
. Code and maintain the data platform & reporting analytics
. Design, build and maintain data pipelines.
. Desire to collaborate with a smart, supportive engineering team
. Strong passion for data and willingness to learn new skills
. Experience with NoSQL databases (e.g., DynamoDB, Couchbase, MongoDB)
. Experience with data ETL tools (dbt and AWS Glue)
. Expert knowledge in SQL and deep experience managing relational databases such as PostgreSQL
. Experience coding in Python, Scala, or R
. Strong understanding of how to build a data pipeline (a minimal sketch follows)
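
Here is a minimal sketch of the extract-transform-load pattern behind the pipeline bullets above: read raw CSV events, aggregate with pandas, and load the result into PostgreSQL. The file path, table name, and connection string are placeholders.

    # Minimal ETL sketch: CSV -> pandas aggregate -> PostgreSQL.
    # Path, DSN, and table name are placeholders.
    import pandas as pd
    from sqlalchemy import create_engine

    # Extract: raw events from a CSV drop (placeholder path).
    events = pd.read_csv("/data/raw/events.csv", parse_dates=["timestamp"])

    # Transform: daily counts per event type.
    daily = (
        events
        .assign(day=events["timestamp"].dt.date)
        .groupby(["day", "event_type"])
        .size()
        .reset_index(name="count")
    )

    # Load: write the aggregate table to PostgreSQL (placeholder DSN).
    engine = create_engine("postgresql://user:***@localhost:5432/analytics")
    daily.to_sql("daily_event_counts", engine, if_exists="replace", index=False)

A production version of the same pipeline would typically run under a scheduler and load incrementally rather than replacing the table each run.
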