Enhancing CNNs via structural intervention with XGBoost

Research output: Contribution to journal › Article › peer-review

Abstract

This research investigates a novel hybridization strategy between Convolutional Neural Networks (CNNs) and gradient-boosted decision trees to enhance image classification accuracy. While conventional approaches focus on optimizing either CNN architectures or machine learning algorithms independently, we propose that intervening in the architecture itself—by strategically replacing the dense classifier portion of the CNN with a tree-based learner—can yield superior results. In our study, we construct a CNN composed of three convolutional blocks, each followed by ReLU activation, max-pooling, and dropout layers. Instead of proceeding through the final dense layers, we extract features immediately after the Flatten layer and input them into an XGBoost classifier. Our experiments reveal that applying XGBoost to these flattened features results in a higher classification accuracy than the fully optimized CNN. Although other datasets were examined during initial testing, this paper focuses exclusively on CIFAR-10 for clarity and reproducibility. The findings suggest that performance gains can be achieved through structural interventions in model architecture, challenging the prevailing emphasis on end-to-end optimization.

Original language: English
Article number: 025230
Journal: Engineering Research Express
Volume: 7
Issue number: 2
DOIs
State: Published - Apr 24, 2025

Scopus Subject Areas

  • General Engineering

Keywords

  • XGBoost
  • convolutional neural networks
  • hybrid models
  • image classification
  • statistical analysis

