What is the best way to get video signal from the field to the model on the cloud?

Hi,
I have cameras in the field, and I need to find a way to get the video from the cameras to the cloud so the model can run inference on it.
What is the best way to manage this, and how can it be scaled up?
Thanks a lot.


Hello @Gilad_Lerman, and welcome to the DeepLearning.AI community!

I would suggest you connect your cameras to your local computer; that way it will be much easier to access cloud inference services like Vertex AI Vision. If you're not very familiar with model deployment methods, please do check the last course of the Machine Learning Engineering for Production (MLOps) Specialization.
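For example, here is a minimal sketch of grabbing a frame from a locally connected camera and sending it to a cloud inference service over HTTP. The endpoint URL and response format are placeholders (not the actual Vertex AI Vision API), so treat it as a starting point only:

```python
# Minimal sketch: capture one frame locally and send it to a cloud
# inference endpoint. The URL and response format are hypothetical.
import cv2
import requests

ENDPOINT = "https://example.com/v1/infer"  # placeholder inference endpoint

cap = cv2.VideoCapture(0)      # first camera attached to the local computer
ok, frame = cap.read()
cap.release()

if ok:
    # Compress to JPEG so we upload kilobytes instead of a raw frame.
    _, jpeg = cv2.imencode(".jpg", frame)
    resp = requests.post(
        ENDPOINT,
        data=jpeg.tobytes(),
        headers={"Content-Type": "image/jpeg"},
        timeout=10,
    )
    print(resp.json())         # inference result returned by the service
```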

Note: in real-time systems the cloud may not be a good option, due to its higher latency compared to on-device inference.


As @Isaak_Kamau explains, implementing this in the cloud is often not practical. You will want to implement your computer vision model ‘on the edge’, meaning right where the action is happening.

For computer vision you can use pre-built solutions like OpenCV or TensorFlow Lite to run your model on resource-constrained devices such as an Android device or a Raspberry Pi. Once you implement your model on one of these small devices, you can place the device right next to your camera and perform your inference there.
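For instance, a minimal sketch of running a TensorFlow Lite model on frames captured with OpenCV could look like the following; the model file name and the 224x224 input size are assumptions you would adapt to your own model:

```python
# Minimal sketch: TFLite inference on camera frames at the edge.
# "model.tflite" and the 224x224 input size are assumptions.
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalize the frame to the model's expected input shape.
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    interpreter.set_tensor(inp["index"], x[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    print("top class:", int(np.argmax(scores)))
cap.release()
```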


Thank you @Isaak_Kamau and @Juan_Olano for your answers. Following your suggestions: if I do the inference on the edge, then why do I need the cloud at all? How do I control many cameras in a centralized way? And where do I store the images? Should I transfer them to the cloud in order to use them for retraining the model?
Sorry for the many questions; I will check the last course of the MLOps specialization, but a quick start is always very helpful.
Thanks


I think it depends on your specific use case.

I would use the cloud to:

  1. Send the result of the inference and apply some business rules there.
  2. Send snippets of video that support specific detected events (say your model detects an event that needs documentation; I would send that piece of video to support the inference). A minimal sketch of both ideas follows below.
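As an illustration, here is a minimal sketch of an edge device that always pushes the lightweight inference result to the cloud and only uploads a supporting video clip for high-confidence events. The URLs, the event schema, and the report_event helper are hypothetical:

```python
# Minimal sketch: send inference results (and, for notable events, a short
# video clip) from the edge to the cloud. URLs and schema are placeholders.
import json
import time
import requests

EVENTS_URL = "https://example.com/api/events"  # hypothetical endpoint
CLIPS_URL = "https://example.com/api/clips"    # hypothetical endpoint

def report_event(camera_id, label, score, clip_path=None):
    event = {
        "camera_id": camera_id,
        "label": label,
        "score": score,
        "timestamp": time.time(),
    }
    # 1. Always send the small JSON result of the inference.
    requests.post(
        EVENTS_URL,
        data=json.dumps(event),
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    # 2. Only upload a video snippet when the event is worth documenting.
    if clip_path and score > 0.9:
        with open(clip_path, "rb") as f:
            requests.post(CLIPS_URL, files={"clip": f},
                          data={"camera_id": camera_id}, timeout=60)

# Example: a high-confidence detection with a saved 10-second clip.
# report_event("barn-cam-3", "animal_running", 0.95, "clips/event_001.mp4")
```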

There are surely many other reasons to include the cloud in a solution like this.

Now: can you implement the production model in the cloud and broadcast from the cameras to the cloud to do the inference there? Yes, you can. But I think that 1) transmitting video is expensive (data- and cost-wise), and 2) doing inference in the cloud from video in remote locations will probably have high latency, which may or may not matter to you.

For real-time or close to real-time inference, I would go with an edge implementation of the model.


Hi there,

As the fellow mentors explained, an edge deployment can be considered.

There are also cases where (close to) real-time streaming and processing is possible in the cloud. There is some inspiration & info in the thread linked below.

Note that your architecture decision is usually a trade-off: it has to serve what you want to achieve, what is important for you and your specific case, and how you want to create value.

This thread might be interesting for you, @Isaak_Kamau: AI in streaming Application - #2 by Christian_Simonis

Best regards
Christian


Technically speaking, as @Christian_Simonis confirms, it is possible. One consideration I had when sharing my point of view is cost.

If the cameras are not plugged into a wired network, most probably they will be streaming over carrier data. That can scale the cost up rather quickly. Also, carrier data may not be as fast as, say, WiFi. So keep in mind how the cameras are connected to the cloud.

In the USA you can get a good price and acceptable speed on data for a modem to transmit videos. For example, you can get 1 GB for around $10/month, and if the modem works on 5G, the speed will be very good. In rural areas, though, carrier data may not be as fast, and it can even be unreliable. Finally, in my opinion, for a mission-critical system I would probably go with an edge solution coupled with a cloud solution as a backup.
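To get a feel for how quickly continuous video eats into a data plan, here is a rough back-of-envelope estimate. The bitrate, frame size, and price per GB are illustrative assumptions, not measured figures:

```python
# Back-of-envelope: monthly data volume for streaming video vs. sending
# individual frames. All numbers below are assumptions.
SECONDS_PER_MONTH = 30 * 24 * 3600

def streaming_gb_per_month(bitrate_mbps):
    """GB per month for a continuous stream at the given bitrate."""
    return bitrate_mbps / 8 * SECONDS_PER_MONTH / 1024  # Mbps -> MB/s -> GB

video_gb = streaming_gb_per_month(2.0)   # assume a ~2 Mbps compressed stream
frames_gb = 0.2 * 60 * 24 * 30 / 1024    # assume one ~200 KB JPEG per minute
price_per_gb = 10.0                      # assumed carrier price, USD per GB

print(f"Continuous 2 Mbps stream: ~{video_gb:,.0f} GB/month, ~${video_gb * price_per_gb:,.0f}/month")
print(f"One ~200 KB frame/minute: ~{frames_gb:,.1f} GB/month, ~${frames_gb * price_per_gb:,.0f}/month")
```

Even with generous assumptions, a continuous stream is orders of magnitude more data than sending occasional frames or only inference results.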


@Christian_Simonis @Juan_Olano Can a web application work? For example, a web app that consumes the model via APIs, since @Gilad_Lerman says they need to “control many cameras in a centralized way”. I am not very familiar with web apps, but maybe you can suggest whether there are ways they could be implemented to solve the problem…

@Christian_Simonis I have seen the thread AI in streaming Application. Maybe @Gilad_Lerman should also consider giving a hint of what his project involves, so as to get a more accurate answer.

The components of this system would be:

  • Camera(s) - assuming this is, say, out in the open. If it is inside a building, you could probably use a webcam connected to a laptop or stand-alone computer.
  • Edge device connected to the camera, OR a WiFi card that connects the camera to the internet via WiFi, OR a modem that connects the camera to the internet via a carrier.
  • For local processing, a computing device (Android? Raspberry Pi?) to run inference, OR, for remote processing, a server that hosts the running model to do inference.

A website could be used to visualize the data, and probably to set parameters, but other than that, at this point, I see no other use for it.

@Gilad_Lerman Please refer to this! Let us know if you have any other questions. Thank you @Juan_Olano and @Christian_Simonis for the amazing suggestions. Cheers!


Thank you all for your great suggestions. The cameras are filming farm animals, and I want to extract different features from these videos using DL as well as classic computer vision algorithms. Most features don’t need real-time processing, so maybe 1 frame per minute is enough, but some do need a higher frame rate, such as tracking an animal when it moves fast, so maybe 10 frames per second. So I wonder whether I should place a processing device at each farm site and transmit only inferences to the cloud/server (I also need to think about how to transfer videos from time to time for retraining), or transmit all the videos to a server/cloud with strong GPUs and analyze them there. Which option is cheaper? Should I try to do it myself, or is it better to outsource the operations and the cloud…
Many thoughts, since I have usually focused on the DL model and getting the best results out of it, and I don’t have much experience in deployment and cloud architecture - and every decision has a high price tag, so it’s better to get some good practical tips.
Thanks again!

I’m an ML novice, but I have experience in cloud architecture. One thing not mentioned in the above responses: a ‘good’ solution largely depends on the needs of your users or customers.

Does your application require real-time inference? If so, deploying to the edge makes sense. You mentioned that you mostly don’t need real time, so that’s a factor.

Who is footing the bill for what? You mentioned the cloud potentially being cheaper, but cheaper for whom? If you are selling hardware to users, having hardware that runs at the edge could be very expensive depending on the performance characteristics of your model.

As @Juan_Olano mentioned, if you are deploying to farms, which are likely in remote areas, you have to consider connectivity options, but you should also consider the schlep of pushing model or firmware updates out to your fleet of devices.
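To make that “schlep” concrete, one common pattern is to have each edge device periodically poll a version endpoint and swap in a newer model file when one is published. This is only a sketch; the endpoint, response format, and file names are hypothetical, not a specific fleet-management product:

```python
# Minimal sketch: an edge device polling for model updates.
# The version endpoint, download URL, and file names are hypothetical.
import json
import time
import requests

VERSION_URL = "https://example.com/api/model/latest"  # placeholder
LOCAL_META = "model_meta.json"
LOCAL_MODEL = "model.tflite"

def current_version():
    try:
        with open(LOCAL_META) as f:
            return json.load(f).get("version", "none")
    except FileNotFoundError:
        return "none"

def check_for_update():
    # Assumed response shape: {"version": "...", "url": "..."}
    latest = requests.get(VERSION_URL, timeout=10).json()
    if latest["version"] != current_version():
        blob = requests.get(latest["url"], timeout=60).content
        with open(LOCAL_MODEL, "wb") as f:
            f.write(blob)
        with open(LOCAL_META, "w") as f:
            json.dump({"version": latest["version"]}, f)
        print("updated model to", latest["version"])

while True:
    check_for_update()
    time.sleep(6 * 3600)  # check a few times per day
```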

The right architecture is going to be based on the answers to questions that matter to your users.