Hi,
I’m Karthik, and I have taken the Deep Learning Specialization course. My background is in ab-initio and device-level simulations for the semiconductor industry.
I was thinking about a toy project to further explore and solidify the concepts I have learnt. For this, I thought about classifying real numbers as int or float without using modulo or any other explicit indicator that we are trying to distinguish int from float. All of that information would only be present in the labelled dataset.
Has anyone tried such a problem? If it is impossible with neural networks/classifiers, could you explain why and perhaps suggest another project?
If you are going to use a plain neural network, how are you going to present the numbers as training examples without them being defined as a floating point data type when you input the data?
Perhaps you could do this if the training examples were strings of text, and you used a Recurrent Neural Network to learn whether the strings included a decimal point or used scientific notation.
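Here is a rough sketch of what that could look like, just to illustrate the idea (the vocabulary, label rule, synthetic data generator, and model sizes below are arbitrary choices of mine, not a recommended setup): a small character-level LSTM reads each number written as text and predicts whether it is an integer.

```python
# Minimal sketch: a character-level LSTM that classifies number strings as
# "int" (label 1) vs "float" (label 0). The synthetic generator below is only
# a stand-in for whatever labelled dataset you actually build.
import random
import numpy as np
import tensorflow as tf

VOCAB = list("0123456789.-e")                        # characters a number string may use
CHAR2IDX = {c: i + 1 for i, c in enumerate(VOCAB)}   # index 0 is reserved for padding
MAX_LEN = 16

def encode(s):
    """Map a number string to a fixed-length sequence of character indices."""
    ids = [CHAR2IDX[c] for c in s][:MAX_LEN]
    return ids + [0] * (MAX_LEN - len(ids))

def make_example():
    """One (string, label) pair: an integer written as text, or a non-integer."""
    n = random.randint(-1000, 1000)
    if random.random() < 0.5:
        return str(n), 1                                    # e.g. "34"        -> 1
    return f"{n + random.uniform(1e-6, 0.999999):.6f}", 0   # e.g. "33.999999" -> 0

strings, labels = zip(*(make_example() for _ in range(20000)))
X = np.array([encode(s) for s in strings])
y = np.array(labels)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(VOCAB) + 1, 8),   # per-character embeddings
    tf.keras.layers.LSTM(16),                       # reads the character sequence
    tf.keras.layers.Dense(1, activation="sigmoid"), # P(label = "int")
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, validation_split=0.1)
```

With the characters visible, the network mainly has to learn whether the string contains a decimal point or an exponent, which is a much easier pattern than recovering the same information from a raw scalar input.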
Thank you for your response.
Yes, my intent was to input numbers in floating-point representation, where 1.00000 would be classified as “int” (label 1) while 1.00000000001 would be classified as “floating point” (label 0). The same applies to 34 vs 33.999999, for example. Basically, I want to design a network that learns a pattern present in the labelled dataset without any other specification. In this case, the pattern is the modulo operation (a cyclic pattern along the real number line). I was wondering how involved that might be. Perhaps I have gaps in my understanding of the relationship between the data representation and the pattern the network learns.
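To make that concrete, the labelled dataset I have in mind would be generated roughly like this (just a sketch; the ranges, offsets, and names are arbitrary placeholders):

```python
# Sketch of the labelled dataset: each example is a single scalar handed to the
# network as a float. Label 1 ("int") when the value is exactly an integer,
# label 0 ("float") otherwise. No modulo-style feature is given to the model,
# only the raw number and its label.
import random
import numpy as np

def make_numeric_example():
    n = random.randint(-100, 100)
    if random.random() < 0.5:
        return float(n), 1                        # e.g. 34.0      -> label 1
    return n + random.uniform(1e-6, 0.999999), 0  # e.g. 33.999999 -> label 0

pairs = [make_numeric_example() for _ in range(10000)]
X = np.array([value for value, _ in pairs]).reshape(-1, 1)   # shape (10000, 1)
y = np.array([label for _, label in pairs])
```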
Using an RNN to classify a string as float or int is an interesting approach. I’ll try to pursue that first. Thank you for the suggestion!