What is the Max Absolute Scaler? How does it compare with Min-Max normalization, and why might scaling to [-1, 1] be better than scaling to [0, 1]?

The ‘Max Absolute Scaler’ is another Feature Scaling option open to us as we preprocess our training data.

Like the majority of Feature Scaling techniques, it is a transformation applied to Numerical Features. Depending upon your particular use case, it may be required to ensure your data is in a format suitable for the algorithms you have selected.

The ‘Max Absolute Scaler’ can be considered a close relation of the Min Max Scaler, and acts in a similar manner. It scales each feature by dividing every value by the feature’s maximum absolute value, so the result always lies in the range [-1, 1]. Unlike the Min Max Scaler, it does not shift the data: negative values keep their sign, and values of exactly zero remain zero. Because both scalers divide by a statistic taken from the extremes of the data, the Max Absolute Scaler shares the same drawback as the Min Max Scaler: in use cases such as Fraudulent Transaction Detection, if your data possesses outliers, anomalies or novel values, a single extreme value dominates the strict scaling and compresses the rest of the feature into a narrow band.
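As a minimal sketch of that outlier sensitivity (the function name here is illustrative, not from any library), consider max-absolute scaling applied to a feature that contains one anomalous value:

```python
# Illustrative implementation of max-absolute scaling for one feature.

def max_abs_scale(xs):
    """Divide each value by the largest absolute value in the feature."""
    m = max(abs(x) for x in xs)
    return [x / m for x in xs]

clean   = [1.0, 2.0, 3.0, 4.0]
outlier = [1.0, 2.0, 3.0, 400.0]   # one anomalous transaction amount

print(max_abs_scale(clean))    # [0.25, 0.5, 0.75, 1.0]
print(max_abs_scale(outlier))  # [0.0025, 0.005, 0.0075, 1.0] -- bulk squashed near zero
```

With the outlier present, the legitimate values are all squeezed close to zero, which can hide the structure the model needs to learn.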

Whereas the Min Max Scaler maps the minimum of a feature to 0 and the maximum to 1, the Max Absolute Scaler maps the value with the largest magnitude to 1 (or -1). Scaling to [-1, 1] can therefore be preferable to scaling to [0, 1]: because the data is not shifted, the sign of each value is preserved, sparse data stays sparse (zero entries remain zero), and the result is closer to being centred on zero, which suits algorithms and activation functions (such as tanh) that expect inputs roughly symmetric around zero.
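A short side-by-side sketch of the two transformations on a single numeric feature (again, the function names are illustrative) makes the difference visible:

```python
# Illustrative implementations of the two scalers for a single feature.

def max_abs_scale(xs):
    """x / max|x| -> values in [-1, 1]; no shift, zeros stay zero."""
    m = max(abs(x) for x in xs)
    return [x / m for x in xs]

def min_max_scale(xs):
    """(x - min) / (max - min) -> values in [0, 1]; data is shifted."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

feature = [-4.0, -2.0, 0.0, 1.0, 2.0]

print(max_abs_scale(feature))  # [-1.0, -0.5, 0.0, 0.25, 0.5] -- signs and zeros preserved
print(min_max_scale(feature))  # [0.0, ~0.33, ~0.67, ~0.83, 1.0] -- zero has been moved
```

Note how min-max scaling relocates zero to an arbitrary point inside [0, 1], while max-absolute scaling leaves it in place, which is why the latter is often preferred for sparse data.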
