Making the most of our access to the API, in this article I’ll carry out a thorough review, scoring the main capabilities of GPT-3.
7:00 AM. After a poor night of sleep, the alarm goes off. Oh god. I proceed with my regular email checking. Oh, wait. Something catches my attention… OpenAI? My mood completely changes in just one second. After a looong time on their waitlist, we had been selected to access the Beta version of GPT-3.
As promised, following on from the findings in my previous article, we’ll now focus on Custom Vision “compact” models. What are they? What are their use cases? And how does their performance compare with standard models?
A very interesting Custom Vision feature is the ability to export a simplified version of a model to run on small (IoT) devices or mobile phones, outside the Azure environment.
Once installed on a device, it allows fast inference (there’s no external API to hit) even without an internet connection, so an app can offer real-time features and/or offline support.
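Custom Vision can export compact models in several formats; assuming the TensorFlow Lite variant, on-device inference might look like the sketch below. The model path, input size and label handling are hypothetical placeholders, not taken from the article:

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Crude nearest-neighbour resize + scale to [0, 1], batched for the model."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]
    return (resized / 255.0).astype(np.float32)[np.newaxis, ...]

def classify(model_path: str, image: np.ndarray) -> int:
    """Run the exported compact model locally (no call to the Azure API)."""
    import tensorflow as tf  # only needed on the device doing inference
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], preprocess(image))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])
    return int(np.argmax(scores))
```

Because everything runs on the device, `classify` keeps working with the network off, which is exactly what enables the offline support mentioned above.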
In this article, I’ll explain how you can use a simple trick to replicate locally an architecture with multiple containers running on a single ECS task and an application load balancer performing content-based routing.
In our example, we run a blog platform similar to Medium (in React), and our backend architecture is based on microservices. The API, composed of three main components, is accessible via a single domain (https://api-example), and requests are routed to the correct service following some simple path rules.
The following diagram summarises it:
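Content-based routing of this kind can be sketched as a simple prefix lookup. The path rules and service names below are hypothetical placeholders (the real ones live in the diagram), but the mechanism mirrors what the load balancer listener does:

```python
from typing import Optional

# Hypothetical path rules: each prefix maps to a backend container.
ROUTES = {
    "/posts": "posts-service:3001",
    "/users": "users-service:3002",
    "/comments": "comments-service:3003",
}

def resolve(path: str) -> Optional[str]:
    """Return the container a request path should be forwarded to.

    The longest matching prefix wins; unmatched paths return None
    (the load balancer would answer those with a 404).
    """
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path == prefix or path.startswith(prefix + "/"):
            return ROUTES[prefix]
    return None
```

Locally, the same rules could live in an nginx container sitting in front of the services, which is the trick the rest of the article walks through.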
In this article, I’ll run through the possibilities and tradeoffs of Microsoft Custom Vision. I’ll use an old piece of MIT research that aimed to classify indoor images to compare results, and base the final conclusion on empirical observations.
Custom Vision has been on my radar for a while. The platform, created by Microsoft and part of the Azure ecosystem, allows users to easily upload and tag images to build and train custom Machine Learning (ML) models that can be used to perform classification and object detection. …
In this article, I’ll explain how data can help you make decisions. As a fun example, I’ll work through a quick project attempting to understand how we can maximise the number of likes on our Instagram posts, by visualising the upload information as a heatmap using the Facebook Graph API, Python, Jupyter and Seaborn.
As you’ve probably already heard (like hundreds of times), information is power and it can be used to your benefit to drive your business decisions.
However, even when the information exists (it’s stored somewhere), we face several challenges…
…would have been inconceivable without the preconditions set up by DARPA and Caltech in 1958, but it’s not over yet. (In fact, in this area, we are only about a decade away from having a small fraction of human brain function replaced by AI.)
What can make all this progress possible is the diversity of the technologies we have available. From now on, each innovation, from leading artificial intelligence techniques like deep learning, to mobile technologies (like Fetch, our very own augmented reality technology), is making it faster and more economical to pursue various forms of human-computer interaction.
Get your apps to make use of your trained machine learning model via standard REST requests with SageMaker and TensorFlow.
When working on a project that requires building and training a custom machine learning model, TensorFlow makes the work much easier. It provides you with most of the tools you’ll need to:
TensorFlow’s possibilities are really well documented, from complete beginner to an advanced point of…
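Once a TensorFlow model is deployed behind a SageMaker endpoint, an app can reach it with a plain REST-style call. A minimal sketch using boto3’s `sagemaker-runtime` client; the endpoint name and region are hypothetical placeholders:

```python
import json

def build_payload(instances):
    """TensorFlow Serving-style JSON body: {"instances": [...]}."""
    return json.dumps({"instances": instances})

def predict(instances, endpoint_name="my-tf-endpoint", region="eu-west-1"):
    """Call the deployed model through the SageMaker runtime API."""
    import boto3  # requires AWS credentials to be configured
    client = boto3.client("sagemaker-runtime", region_name=region)
    response = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_payload(instances),
    )
    return json.loads(response["Body"].read())["predictions"]
```

`invoke_endpoint` is just an authenticated HTTPS POST under the hood, so any client capable of signing AWS requests could make the same call.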
On this occasion, I’d like to share a couple of ways to enhance the deployment of your Lambda functions, aiming for complete automation:
I guess this is the first question to ask when deploying Lambda functions via Serverless.
Through the Serverless configuration system, which in its most basic form consists of a single serverless.yml file, we are able to deploy our whole stack. …
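In its simplest form, that single file might look like the sketch below; the service name, handler and region are placeholders, not taken from the article:

```yaml
# serverless.yml (minimal sketch)
service: example-api

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
```

From there, a single `serverless deploy` packages the code, builds the CloudFormation stack and wires up the HTTP event.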
In my last article, I described how to query a GraphQL API provided by AppSync from AWS Lambda, using Cognito User Pools for authentication. A quick summary of the steps followed:
As mentioned in that article, the main goal was achieved, but there was significant room for improvement. So, I’d…
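The step-by-step list is cut off in this excerpt, but the overall flow can be sketched as: authenticate against the User Pool, then POST the query with the returned token. The auth flow (`USER_PASSWORD_AUTH`), endpoint and client ID below are assumptions for illustration:

```python
import json
import urllib.request

def build_request(query, variables=None):
    """GraphQL over HTTP: a JSON body with the query and its variables."""
    return json.dumps({"query": query, "variables": variables or {}})

def run_query(endpoint, query, username, password, client_id, region="eu-west-1"):
    """Authenticate with the Cognito User Pool, then POST the query to AppSync."""
    import boto3
    idp = boto3.client("cognito-idp", region_name=region)
    auth = idp.initiate_auth(
        ClientId=client_id,
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": username, "PASSWORD": password},
    )
    token = auth["AuthenticationResult"]["IdToken"]
    req = urllib.request.Request(
        endpoint,
        data=build_request(query).encode(),
        headers={"Content-Type": "application/json", "Authorization": token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The ID token goes straight into the `Authorization` header, which is what AppSync expects when a User Pool is configured as the API’s auth mode.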
I wasn’t always sure AWS Lambda was the best option for providing the full back end of a medium-to-large app. Nevertheless, my opinion has recently changed (probably thanks to the enthusiasm of my colleague Luke and the discovery of the Serverless framework). While I won’t go through the advantages of a serverless architecture in this article, I will admit I’ve become a true believer in its potential, and I’m convinced it will be part of the chosen architecture for most of our apps in the future.
For our latest hackathon adventure, the team opted for a serverless…
Senior Full Stack Engineer at @gravitywelluk