Hello everyone,
I am a complete beginner in this deep learning course on Coursera, and I have a question that I think hasn't been properly answered yet.
Suppose I want to compress a single file with 40 GB of LiDAR data (an object) and reduce its size to 200 MB using an autoencoder.
Is this possible with only one training example, but with multiple iterations of the gradient calculations on the input layer of the encoder?
And can the compressed code from the encoder be decoded back into exactly the original data?
It would be nice if you could help me out so that I get a better understanding.
And sorry for any awkward phrasing, I am a German living in Germany.
This is an interesting idea, but an encoder-decoder network is not a lossless function.
As you can see, there are two steps, encoding and decoding. During the encoding phase, the network tries to "extract" key features that represent the original data. In doing so, it reduces the dimensions and creates a small feature map (vector) from the input data. This can, of course, be decoded back, or translated into a different language with a similar meaning, etc. But again, the decoding process takes that small feature map, which contains only a handful of features, and tries to create something from it. We can train the network to generate a similar output, but it is all a matter of parameter tuning to minimize the loss. It is not a lossless conversion.
Usually, compression/decompression is lossless. A video encoder/decoder may not be lossless, but it generates quite similar video frames. If you expect your autoencoder to be lossless, I do not think that is possible with a neural network.
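Here is a minimal sketch (not from the course; all sizes and data are made up for illustration) of the idea above: the encoder squeezes the input into a small bottleneck code, and the decoder tries to reconstruct the input from it. Because the bottleneck is much smaller than the input, the reconstruction is only approximate, i.e. lossy by construction.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

input_dim = 1024        # size of one flattened training example (assumption)
bottleneck_dim = 32     # the compressed representation is just 32 numbers

inputs = layers.Input(shape=(input_dim,))
code = layers.Dense(256, activation="relu")(inputs)
code = layers.Dense(bottleneck_dim, activation="relu")(code)     # encoder output
decoded = layers.Dense(256, activation="relu")(code)
decoded = layers.Dense(input_dim, activation="linear")(decoded)  # decoder output

autoencoder = Model(inputs, decoded)
encoder = Model(inputs, code)
autoencoder.compile(optimizer="adam", loss="mse")  # minimize reconstruction error

x = np.random.rand(100, input_dim).astype("float32")  # toy stand-in data
autoencoder.fit(x, x, epochs=10, batch_size=16, verbose=0)

compressed = encoder.predict(x)          # shape (100, 32): the small codes
reconstructed = autoencoder.predict(x)   # close to x, but never identical to it
```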
Thank you very much for your detailed answer. Indeed, I never thought that a neural network could be lossless. You made a very good point; it is already a big deal if a system surpasses human-level performance. What I was looking for is the compression feature of autoencoders, to shrink 3D LiDAR models (e.g. mine shafts, large industrial plants, etc.) tremendously so that the encoded representation can be uploaded to the cloud. Then, if you want to review the original data rather than a synthetic, standardized model, you can download it in minutes instead of hours and get roughly the same view of the 3D LiDAR model as the one that was encoded before. It doesn't matter if it has some random broken/missing features; more important is the fact that you only have one example for the training/test set.
https://www.airclip.de/lidar-fuer-vermessungsdrohnen
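For the "only one example" situation described above, a rough sketch (all names, sizes, and data below are invented for illustration) is to cut the single point cloud into fixed-size chunks, so the autoencoder effectively sees many small examples, and to compress each chunk into a short code. The reconstruction will contain small errors, which matches the "roughly the same view" requirement.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

points_per_chunk = 2048                                    # chunk size (assumption)
cloud = np.random.rand(1_000_000, 3).astype("float32")     # stand-in for a real scan

n_chunks = len(cloud) // points_per_chunk
chunks = cloud[: n_chunks * points_per_chunk].reshape(n_chunks, points_per_chunk * 3)

inputs = layers.Input(shape=(points_per_chunk * 3,))
code = layers.Dense(512, activation="relu")(inputs)
code = layers.Dense(64, activation="relu")(code)           # 64 numbers per chunk
out = layers.Dense(512, activation="relu")(code)
out = layers.Dense(points_per_chunk * 3, activation="linear")(out)

autoencoder = Model(inputs, out)
encoder = Model(inputs, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(chunks, chunks, epochs=50, batch_size=32, verbose=0)

codes = encoder.predict(chunks)                     # upload these small codes
recon = autoencoder.predict(chunks).reshape(-1, 3)  # approximate point cloud only
```

Note that to decode on the cloud you also need to upload the decoder weights once, and the achievable compression ratio and quality depend entirely on the data and tuning.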
Just a confirmation…
Indeed, I never thought that a neural network could be lossless.
As I wrote,
This is an interesting idea, but an encoder-decoder network is not a lossless function.
LiDAR data is actually good data for a neural network. Typical uses are object detection, object localization, and object classification. If you take the next course, Convolutional Neural Networks, there is an assignment on object detection using YOLO, which is a fast object detection algorithm. The data there is image data, not LiDAR data, but it should be fun for you.
Hi Mentors and ML learning team,
I have a quick question about where to obtain grayscale image data to load and train on (X_train and y_train). I have to build an autoencoder (with at least 4 encoding layers) for a dataset of 500 grayscale natural images of size 600 × 600. I recently completed the ML Specialization; this looks like an unsupervised learning task for training a neural network, but as this is my first real-world task I need some guidance from ML experts.
Could you please guide me on where to start?
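As a starting point only (the folder name and hyper-parameters below are assumptions, not part of any course assignment), here is one way to load your own 600 × 600 grayscale images from a directory and train a convolutional autoencoder with 4 encoding (downsampling) layers. For an autoencoder the "label" is the image itself, so no separate y_train is needed.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Load images as grayscale; "images/" should contain your 500 natural images.
ds = tf.keras.utils.image_dataset_from_directory(
    "images/", labels=None, color_mode="grayscale",
    image_size=(600, 600), batch_size=8)
ds = ds.map(lambda x: (x / 255.0, x / 255.0))   # input == target

inputs = layers.Input(shape=(600, 600, 1))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)   # 300x300
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)        # 150x150
x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)       # 75x75
encoded = layers.Conv2D(16, 3, strides=5, padding="same", activation="relu")(x)  # 15x15 bottleneck

x = layers.Conv2DTranspose(128, 3, strides=5, padding="same", activation="relu")(encoded)  # 75x75
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)         # 150x150
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)         # 300x300
outputs = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x) # 600x600

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(ds, epochs=20)
```

If you do not have a dataset yet, any public collection of natural images converted to grayscale and resized to 600 × 600 would work for practice; the architecture above is just one of many reasonable layouts.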