XGBoost Distributed Deployment Tutorial




XGBoost is an excellent open-source tool for gradient boosting learning. Thanks to its many numerical and non-numerical optimizations (see the paper "XGBoost: A Scalable Tree Boosting System"), it is remarkably fast: in our tests, a dataset that previously took hours to train a GBDT (gradient boosted tree) model on could be trained in just 10 minutes using half the cluster resources. For various reasons, deploying it in our environment took me more than a month, during which I asked a lot of questions on GitHub and even tracked down bugs for the author. So today I am writing up this deployment tutorial for colleagues who may need it.

Note: clone with the --recursive flag to fetch the specific version of xgboost you need, together with its submodules.
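
For reference, a minimal clone sketch (the checkout target below is a placeholder; use whatever version you actually need):

  git clone --recursive https://github.com/dmlc/xgboost
  cd xgboost
  git checkout <tag-or-commit>
  git submodule update --init --recursive   # keep dmlc-core and rabit in sync with the checkout
  cd ..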

  mkdir xgboost-packages
  cp -r xgboost xgboost-packages/

Install the packages the build depends on

Install gcc-4.8.2

  cd gcc-4.8.2
  ./contrib/download_prerequisites
  # create a directory to hold the build output
  cd ..

Create the build output directory

mkdir gcc-build-4.8.2

Enter this directory and run the following command to generate the Makefiles (gcc will be installed under ${HOME}):

cd gcc-build-4.8.2
../gcc-4.8.2/configure --enable-checking=release --enable-languages=c,c++ --disable-multilib --prefix=${HOME}

Compile

make -j21

Install

make install

Modify the PATH variable to switch the default gcc version

export PATH=$HOME/bin:$PATH
cp -r ~/lib64 ~/xgboost-packages
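
A quick sanity check (not part of the original steps) that the newly built compiler is now the default:

which gcc        # should print ${HOME}/bin/gcc
gcc --version    # should report 4.8.2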

Install cmake

tar -zxf cmake-3.5.2.tar.gz
cd cmake-3.5.2
./bootstrap --prefix=${HOME}
make -j21
make install

Download and build libhdfs

unzip hadoop-common-cdh5-2.6.0_5.5.0.zip
cd hadoop-common-cdh5-2.6.0_5.5.0/hadoop-hdfs-project/hadoop-hdfs/src
cmake -DGENERATED_JAVAH=/opt/jdk1.8.0_60 -DJAVA_HOME=/opt/jdk1.8.0_60 .
make
# copy the built libraries into xgboost-packages
cp -r target/usr/local/lib ${HOME}/xgboost-packages/libhdfs
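
The run script later in this tutorial loads libhdfs.so.0.0.0 from this directory, so it is worth confirming the copy succeeded:

ls ${HOME}/xgboost-packages/libhdfs
# expect libhdfs.so.0.0.0 among the results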

Build xgboost

cd ${HOME}/xgboost-packages/xgboost
cp make/config.mk ./
# edit config.mk to enable HDFS support:
# whether use HDFS support during compile
USE_HDFS = 1
HADOOP_HOME = /usr/lib/hadoop
HDFS_LIB_PATH = $(HOME)/xgboost-packages/libhdfs
# compile
make -j22
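
If you prefer to script the config.mk edits above, here is a sketch using sed (it assumes the three variables already appear in the copied config.mk, as they do in xgboost releases of this vintage):

sed -i 's|^USE_HDFS.*|USE_HDFS = 1|' config.mk
sed -i 's|^HADOOP_HOME.*|HADOOP_HOME = /usr/lib/hadoop|' config.mk
sed -i 's|^HDFS_LIB_PATH.*|HDFS_LIB_PATH = $(HOME)/xgboost-packages/libhdfs|' config.mk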

# change the first line (shebang) of dmlc_yarn.py to:
#!/usr/bin/python2.7
# change the first line (shebang) of run_hdfs_prog.py to:
#!/usr/bin/python2.7
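
The same shebang changes can be applied with sed (paths are relative to the xgboost checkout used throughout this tutorial):

cd ${HOME}/xgboost-packages/xgboost
sed -i '1s|.*|#!/usr/bin/python2.7|' dmlc-core/tracker/dmlc_yarn.py
sed -i '1s|.*|#!/usr/bin/python2.7|' dmlc-core/yarn/run_hdfs_prog.py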

# add the necessary training parameters
cd ${HOME}/xgboost-packages/xgboost/demo/distributed-training
echo -e "booster = gbtree\nobjective = binary:logistic\nsave_period = 0\neval_train = 1" > mushroom.hadoop.conf
# test script: run_yarn.sh
#!/bin/bash
if [ "$#" -lt 2 ];
then
        echo "Usage:  "
        exit -1
fi
# put the local training file to HDFS
DATA_DIR="/user/`whoami`/xgboost-dist-test"
#hadoop fs -test -d ${DATA_DIR} && hadoop fs -rm -r ${DATA_DIR}
#hadoop fs -mkdir ${DATA_DIR}
#hadoop fs -put ../data/agaricus.txt.train ${DATA_DIR}
#hadoop fs -put ../data/agaricus.txt.test ${DATA_DIR}
# necessary env
export LD_LIBRARY_PATH=${HOME}/xgboost-packages/lib64:$JAVA_HOME/jre/lib/amd64/server:${HOME}/xgboost-packages/libhdfs:$LD_LIBRARY_PATH
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-yarn
export HADOOP_YARN_HOME=$HADOOP_MAPRED_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
# running rabit, pass address in hdfs
../../dmlc-core/tracker/dmlc_yarn.py -n $1 --vcores $2 \
    --ship-libcxx ${HOME}/xgboost-packages/lib64 \
    -q root.machinelearning \
    -f ${HOME}/xgboost-packages/libhdfs/libhdfs.so.0.0.0 \
    ../../xgboost mushroom.hadoop.conf nthread=$2 \
    data=hdfs://ss-hadoop${DATA_DIR}/agaricus.txt.train \
    eval[test]=hdfs://ss-hadoop${DATA_DIR}/agaricus.txt.test \
    eta=1.0 \
    max_depth=3 \
    num_round=3 \
    model_out=hdfs://ss-hadoop/tmp/mushroom.final.model
# get the final model file
hadoop fs -get /tmp/mushroom.final.model final.model
# use dmlc-core/yarn/run_hdfs_prog.py to set up the appropriate env
# output prediction task=pred
#../../xgboost.dmlc mushroom.hadoop.conf task=pred model_in=final.model test:data=../data/agaricus.txt.test
#../../dmlc-core/yarn/run_hdfs_prog.py ../../xgboost mushroom.hadoop.conf task=pred model_in=final.model test:data=../data/agaricus.txt.test
# print the boosters of final.model in dump.raw.txt
#../../xgboost.dmlc mushroom.hadoop.conf task=dump model_in=final.model name_dump=dump.raw.txt
#../../dmlc-core/yarn/run_hdfs_prog.py ../../xgboost mushroom.hadoop.conf task=dump model_in=final.model name_dump=dump.raw.txt
# use the feature map in printing for better visualization
#../../xgboost.dmlc mushroom.hadoop.conf task=dump model_in=final.model fmap=../data/featmap.txt name_dump=dump.nice.txt
../../dmlc-core/yarn/run_hdfs_prog.py ../../xgboost mushroom.hadoop.conf task=dump model_in=final.model fmap=../data/featmap.txt name_dump=dump.nice.txt
cat dump.nice.txt
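
With everything in place, launch the job from demo/distributed-training by passing the number of workers and the vcores per worker; the values below are only an example:

cd ${HOME}/xgboost-packages/xgboost/demo/distributed-training
bash run_yarn.sh 4 4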
