@mike89 I would agree with Paul that more information is needed here.
Also, as he seems to suggest, here is a point I can stress from experience: yes, neural networks can do some amazing things, but they are not an 'all-purpose sledgehammer' for every occasion. In the end, being a good Data Scientist, in my mind, is not just about harnessing the latest and greatest techniques, but about knowing the best/right tool for the job.
As an example: as part of a Capstone project for another course (before I started the Deep Learning Specialization), I decided to build a binary classifier for malware detection, with something like 128 features and roughly 4,500 examples.
Part of the requirement was that we perform the analysis with two different models. I hadn't worked with neural nets yet at that time, though I wished to learn, so I originally chose one as my 'second' model (wrongly expecting, as we will see, that its results would be superior). For the first, I just took a guess among the traditional ML classification models and ran an SVM (Support Vector Machine).
Lo and behold, with just the SVM I was getting 98% accuracy on my test set! I was rather blown away. Maybe I could have squeezed out 99% with a neural net (or maybe not), but all the overhead of a neural net just didn't seem worth it at that point. Instead I ran a KNN (K-Nearest Neighbors) as the second model. I didn't do as extensive a hyperparameter search, and as expected it came in slightly lower at 97%-- but still pretty good.
Further, both of these models trained and ran really fast.
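For anyone curious what that comparison looks like in practice, here is a minimal sketch in scikit-learn. Note this uses synthetic data (`make_classification`) as a stand-in for the actual malware feature set, just matching the rough shape I described (~4,500 examples, 128 features), so the accuracies it prints won't match mine:

```python
# Sketch of the SVM-vs-KNN comparison, on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Roughly the shape described above: ~4500 examples, 128 features.
X, y = make_classification(n_samples=4500, n_features=128,
                           n_informative=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

results = {}
for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    # Scaling matters a lot for both SVM and KNN, hence the pipeline.
    model = make_pipeline(StandardScaler(), clf)
    model.fit(X_train, y_train)
    results[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name} test accuracy: {results[name]:.3f}")
```

Both models fit in seconds on a dataset this size, which was a big part of the appeal.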
Plus, in the case of CNNs-- I might be wrong, but their structure and typical application really don't seem like an appropriate fit for time-series analysis (assuming you're acquiring ADC values from the photodiode). A CNN seeks to ferret out features while condensing or compressing them-- not exactly something you want to be doing with values that have a direct relationship to time, which is a regular/fixed parameter.
Finally, in that vein, here is another good video I found at the time that explains in detail why bigger != always better: