New poll regarding the announcement of DV-2018 lottery results
https://www.mohajersara.org/forum/thread-7409.html
Open discussion about the announcement of results (countdown)
2016-12-30, 23:51
(Last edited: 2016-12-30, 23:58 by shebrahimi.)
(2016-12-30, 23:42) shebrahimi wrote:
> (2016-12-30, 23:05) darabi wrote:
>> (2016-12-30, 14:20) shebrahimi wrote: Greetings. I would appreciate it if you could let me know how to determine feature importance for an XGBRegressor while using a Pipeline. I have also been advised that if I build a good regression model, I can predict values and flag bankruptcy whenever the predicted value falls below some threshold, which I would tune on a hold-out set to strike the right balance between precision and recall. However, I don't understand how to do this.
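The thresholding idea mentioned above can be sketched as follows. This is a minimal, self-contained illustration with synthetic hold-out labels and scores (the data, the threshold grid, and the variable names are all assumptions, not taken from the post); the point is only the mechanics of sweeping a cut-off and picking the one that best balances precision and recall:

```python
# Sketch: turn a regressor's continuous predictions into a bankruptcy flag by
# thresholding, tuning the threshold on a hold-out set. Synthetic data below
# stands in for real hold-out labels and model predictions.
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
y_holdout = rng.integers(0, 2, 200)            # hypothetical labels: 1 = bankrupt
scores = rng.random(200) - 0.3 * y_holdout     # lower score -> more likely bankrupt

best_t, best_f1 = None, -1.0
for t in np.linspace(scores.min(), scores.max(), 50):
    pred = (scores < t).astype(int)            # flag bankruptcy below the threshold
    p = precision_score(y_holdout, pred, zero_division=0)
    r = recall_score(y_holdout, pred, zero_division=0)
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    if f1 > best_f1:                           # keep the best precision/recall trade-off
        best_t, best_f1 = t, f1

print(best_t, best_f1)
```

Here F1 is used as the balance criterion; depending on the cost of missing a bankruptcy versus a false alarm, one could instead maximize recall subject to a minimum precision, or vice versa.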
In fact, I tried "fit.feature_importances_", but this error is reported:

AttributeError: 'Pipeline' object has no attribute 'feature_importances_'

The same happens with plot_importance(fit):

ValueError: tree must be Booster, XGBModel or dict instance

The answer to the following post is similar to what I mean, but unfortunately I couldn't follow the idea: http://datascience.stackexchange.com/que...ikit-learn

Really, I want to do something like the "Feature Selection with XGBoost Feature Importance Scores" approach, in order to see whether my model improves or not.
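The AttributeError above arises because the Pipeline object itself has no feature_importances_ attribute; only the fitted estimator step inside it does. A minimal sketch of reaching into the pipeline via named_steps follows. GradientBoostingRegressor stands in for XGBRegressor here so the example is self-contained without the xgboost package, and the step names ("scale", "model") are hypothetical; the named_steps access is identical for an XGBoost step:

```python
# Sketch: access feature importances from an estimator wrapped in a Pipeline.
# The Pipeline forwards fit/predict but not attributes like feature_importances_,
# so we fetch the fitted step by its name.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = rng.random((120, 5)), rng.random(120)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", GradientBoostingRegressor(n_estimators=20, random_state=0)),
])
pipe.fit(X, y)

# Reach into the fitted step; "model" must match the name declared above.
importances = pipe.named_steps["model"].feature_importances_
print(importances)  # one importance score per input feature
```

The plot_importance ValueError has the same root cause: it must be passed the fitted XGBoost model itself (e.g. the object retrieved via named_steps), not the surrounding Pipeline.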