Senior AI Engineer

Digibee

SaaS
201-500 employees
Fort Lauderdale, Florida, USA

About the job

Overview:

About Digibee

Digibee is an iPaaS that scales integration workflows while reducing cost and technical debt. Rather than require specialized integration experts, Digibee lets every developer quickly build, test, deploy, govern, and monitor integrations across on-premise and cloud environments using a simple but powerful low-code interface.

Founded in São Paulo, Brazil, in 2017 and headquartered in Weston, Florida, our team is widely distributed throughout the Americas. In May of 2023, Digibee closed a Series B funding round of $60 million that is intended to drive our expansion in the United States.

About the role

We are seeking a highly skilled and innovative Senior AI Engineer to join our dynamic team. As a Senior AI Engineer, you will own the end-to-end process of designing and implementing state-of-the-art AI models, spanning classic machine learning, deep learning, and especially generative AI. You will also play a key role in automating and orchestrating pipelines, applying MLOps techniques to make model design and serving more efficient.

The successful candidate will collaborate with cross-functional teams, helping to deliver cutting-edge AI functionality in the Digibee Integration Platform.

On a typical day, you will…

  • Collaborate with cross-functional teams to define goals, requirements, and deliverables, and to implement the resulting features.
  • Research and benchmark how AI is used in low-code software, especially in the iPaaS industry, in terms of architecture, tech stacks, and other relevant technical aspects.
  • Research, design, and develop innovative generative AI models and algorithms, applying techniques such as Mixture of Experts, fine-tuning, and retrieval-augmented generation (RAG) to improve inference quality, performance, and scalability.
  • Design and deploy complex cognitive architectures, using tools such as LangChain and vector databases to orchestrate multiple third-party LLMs, chains, agents, and prompts.
  • Implement and optimize deep learning architectures for generative tasks, focused on text generation.
  • Research, design, and develop models and algorithms using machine learning and deep learning techniques.
  • Evaluate and assess models' performance, inference, scalability, and costs, making necessary adjustments to improve results.
  • Stay up-to-date with the latest advancements in generative AI trends and tools and contribute to the team's knowledge base.
  • Write clean, efficient, and maintainable code, following best practices and coding standards.
  • Document research findings, methodologies, and technical specifications.
  • Participate in code reviews and provide constructive feedback to peers.
  • Contribute to the development of tools and frameworks to facilitate AI research and development.
  • Interpret insights from our pre-sales, sales, and customer success teams, as well as from clients, to create AI-based features.

What you’ll need to bring

  • Master's or PhD degree in Computer Science, Engineering, or a related field.
  • Advanced or fluent English language proficiency.
  • Strong understanding of machine learning, deep learning, and generative models.
  • Proficiency in Python, TensorFlow, PyTorch, and other deep-learning-related tools.
  • Familiarity with the Hugging Face Transformers library of pre-trained models, including GPT, BERT, RoBERTa, and others.
  • Experience developing and training deep learning models using large-scale datasets.
  • Solid understanding of neural network architectures (CNN, RNN, LSTM, Transformers), optimization techniques, and loss functions.
  • Familiarity with natural language processing (NLP) tools and frameworks.
  • Familiarity with tools for designing and operating complex cognitive architectures, such as LangChain, vector databases, and multi-LLM controllers.
  • Strong problem-solving skills and ability to think creatively.
  • Excellent communication and collaboration skills.
  • Ability to work independently and manage multiple projects simultaneously.
  • Experience working with cloud platforms dedicated to MLOps tasks, such as SageMaker or Vertex AI.
  • Experience with monitoring platforms such as Prometheus and Grafana to ensure comprehensive visibility into the performance, accuracy, and reliability of AI models and the underlying infrastructure.
  • Experience in architecting complex software systems using microservices, serverless architectures, or event-driven architectures to support scalable AI solutions.
  • Knowledge of relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB, Cassandra) databases, including schema design, query optimization, and data modeling techniques.
  • Experience in designing, developing, and consuming RESTful APIs and understanding of web services architecture. Knowledge of API security, authentication, and authorization practices.
  • Deep understanding of software development life cycle (SDLC) methodologies, including Agile, Scrum, and Waterfall, to efficiently collaborate in a fast-paced development environment.

Nice to have

  • Familiarity with manipulating open-source LLMs, using techniques such as model merging and Mixture of Experts (MoE).
  • Familiarity with graphs and graph neural networks (GNNs).
  • Knowledge of reinforcement learning techniques and their application in real-world scenarios, including model-based and model-free approaches, multi-agent systems, and deep reinforcement learning.
  • Familiarity with distributed computing systems and technologies such as Kubernetes, Docker, and cloud-based scalable infrastructure to manage the deployment and scaling of AI models.
  • Familiarity with data engineering practices, including the use of ETL tools, data warehousing solutions, and real-time data processing frameworks (e.g., Apache Kafka, Spark).

Location

Brazil / Remote

Skills required
Computer science, MySQL, PostgreSQL, LLMs, Python
Employee location
Fort Lauderdale, Florida, USA
Experience level
Not specified
Workplace type
Remote
Job type
Full time
Compensation
$150,000 - $200,000 /yr
Currency
🇺🇲 USD
