Yes, if you have a suitable set of training data.
Maybe it's possible using something like this:
Body Weight Prediction Model using Mid Upper Arm Circumferences and Knee Height in Adult | Katherina | Indonesian Journal of Public Health Nutrition (IJPHN) (ui.ac.id)
Neck and waist sizes are also used to determine body fat percentage.
In my opinion, several different measurements like these could be combined to predict weight.
Thank you for your contribution
The deep learning model doesn’t understand or care what weight means, so your question is really “can a computer vision model be trained to predict a numeric output?”.
As @TMosh states above, the answer to that question is “Yes”, assuming of course you have sufficient labelled data and make sensible choices about your model architecture, hyperparameters, and approach to training.
@mrtckaya I think what your link describes may not be a computer vision example; it seems to work from structured data, which makes it a different type of machine learning problem with a different approach. Either one might be useful for predicting a person’s body weight when it cannot be measured directly.
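For contrast with the vision approach, the structured-data version is a small tabular regression. The sketch below fits a linear model predicting weight from mid-upper-arm circumference (MUAC) and knee height, in the spirit of the linked paper; the data and coefficients are entirely made up for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical structured-data approach: predict body weight (kg) from
# MUAC (cm) and knee height (cm). All numbers below are synthetic.
rng = np.random.default_rng(1)

muac = rng.uniform(20, 35, size=100)
knee = rng.uniform(40, 60, size=100)
# Synthetic ground truth: weight = 2.0*MUAC + 1.1*knee - 50, plus noise.
weight = 2.0 * muac + 1.1 * knee - 50 + rng.normal(0, 1.5, size=100)

# Ordinary least squares via lstsq; the last column is the intercept.
A = np.column_stack([muac, knee, np.ones_like(muac)])
coef, *_ = np.linalg.lstsq(A, weight, rcond=None)

new_person = np.array([28.0, 52.0, 1.0])   # MUAC = 28 cm, knee height = 52 cm
print(f"predicted weight: {new_person @ coef:.1f} kg")
```

The same idea extends to more predictors (neck, waist, etc.) or to nonlinear models, but the point is that the inputs are a few measured numbers rather than pixels.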
Basically, my question is exactly what you said. Thanks for your contribution.
Even measuring a person’s height or hip size from a photograph is not possible (try putting a pen into its cap with one eye closed), unless you know the real size of something in the image, like a button on the shirt. A binocular (dual-lens) photograph might produce better results; such devices are called depth-sensing cameras. (See: Applied Sciences | Free Full-Text | Human Height Estimation by Color Deep Learning and Depth 3D Conversion)
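The button idea above is just similar triangles: under a pinhole camera model, real size is proportional to pixel size for objects at the same depth, so one object of known real size calibrates the cm-per-pixel ratio. The numbers in this sketch are made up for illustration, and it only works if the button and the person are at (roughly) the same distance from the camera.

```python
# Monocular scale recovery from one known reference size (pinhole model).
# All measurements below are invented for the example.
button_real_cm = 1.2       # known diameter of a shirt button
button_pixels = 14.0       # measured diameter of that button in the photo
person_pixels = 2050.0     # person's height in pixels, same depth plane

cm_per_pixel = button_real_cm / button_pixels
height_cm = person_pixels * cm_per_pixel
print(f"estimated height: {height_cm:.1f} cm")   # prints ~175.7 cm
```

A depth-sensing camera removes the need for the reference object, because it measures the distance to each pixel directly.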
Creating a good model relies on having an appropriate training set.
There was an Amazon research paper from a while ago about a computer vision model that took a front and a profile image of a person, along with their height, weight, and gender, and output body measurements (bicep circumference, torso length, etc.).
They were also nice enough to share their training and test datasets, so you can adapt that data to your use case and try training a model.
I can’t put a link in this post, but if you Google
BodyM Dataset - Registry of Open Data on AWS, it should be the first non-sponsored result.
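As a rough sketch of the multi-input idea behind that setup (not the paper’s actual architecture): combine features from the front image, features from the profile image, and the scalar metadata, then regress several measurements at once. Here random vectors stand in for CNN image embeddings, and both features and targets are synthetic, purely for illustration.

```python
import numpy as np

# Sketch: fuse two image-feature vectors plus scalar metadata, then do
# multi-output regression. Random vectors stand in for CNN embeddings.
rng = np.random.default_rng(2)

n, d = 300, 16
front_feat = rng.normal(size=(n, d))     # stand-in for front-image embedding
profile_feat = rng.normal(size=(n, d))   # stand-in for profile-image embedding
meta = rng.normal(size=(n, 3))           # height, weight, gender (encoded)

# Concatenate all inputs; last column is the intercept.
X = np.hstack([front_feat, profile_feat, meta, np.ones((n, 1))])

# Three synthetic targets, e.g. bicep circumference, torso length, hip width.
W_true = rng.normal(size=(X.shape[1], 3))
Y = X @ W_true + rng.normal(0, 0.1, size=(n, 3))

W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
rmse = np.sqrt(np.mean((X @ W_hat - Y) ** 2))
print(f"multi-output RMSE: {rmse:.3f}")
```

In the real system the embeddings would come from a trained CNN and the head would likely be learned end to end, but the fuse-then-regress structure is the same.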
Hope that helps and good luck!
I’m wondering (I’m new to AI, by the way) how multimodal generative AI models (text- or speech-to-image) can sketch out convincing 2D or 3D images of humans from a prompt like “a heavyweight, well-muscled, middle-aged person eating a large burger” or “an overweight person with a big belly running on a beach with a puppy”, et cetera. Can they do this just by recalling patterns in their unstructured training data?