This loads the PySpark shell:
- (python-big-data)[email protected]:~/Development/access-log-data$ pyspark
- Python 3.6.5 (default, Apr  1 2018, 05:46:30)
- [GCC 7.3.0] on linux
- Type "help", "copyright", "credits" or "license" for more information.
- 2018-08-03 18:13:38 WARN  Utils:66 - Your hostname, admintome resolves to a loopback address: 127.0.1.1; using 192.168.1.153 instead (on interface enp0s3)
- 2018-08-03 18:13:38 WARN  Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
- 2018-08-03 18:13:39 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- Setting default log level to "WARN".
- To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
- Welcome to
-       ____              __
-      / __/__  ___ _____/ /__
-     _\ \/ _ \/ _ `/ __/  '_/
-    /__ / .__/\_,_/_/ /_/\_\   version 2.3.1
-       /_/
- 
- Using Python version 3.6.5 (default, Apr  1 2018 05:46:30)
- SparkSession available as 'spark'.
- >>>
When you start the shell, you also get a web GUI for monitoring the status of your jobs; just browse to http://localhost:4040 to reach the PySpark web GUI.
Let's load the sample data using the PySpark shell:
- dataframe = spark.read.format("csv").option("header","false").option("mode","DROPMALFORMED").option("quote","'").load("access_logs.csv")
- dataframe.show()
PySpark then shows us the DataFrame we just created:
- >>> dataframe.show()
- +----------------+----+----------+--------------------+
- | _c0| _c1| _c2| _c3|
- +----------------+----+----------+--------------------+
- |2018-08-01 17:10|www2|www_access|172.68.133.49 - -...|
- |2018-08-01 17:10|www2|www_access|162.158.255.185 -...|
- |2018-08-01 17:10|www2|www_access|108.162.238.234 -...|
- |2018-08-01 17:10|www2|www_access|172.68.47.211 - -...|
- |2018-08-01 17:11|www2|www_access|141.101.96.28 - -...|
- |2018-08-01 17:11|www2|www_access|141.101.96.28 - -...|
- |2018-08-01 17:11|www2|www_access|162.158.50.89 - -...|
- |2018-08-01 17:12|www2|www_access|192.168.1.7 - - [...|
- |2018-08-01 17:12|www2|www_access|172.68.47.151 - -...|
- |2018-08-01 17:12|www2|www_access|192.168.1.7 - - [...|
- |2018-08-01 17:12|www2|www_access|141.101.76.83 - -...|
- |2018-08-01 17:14|www2|www_access|172.68.218.41 - -...|
- |2018-08-01 17:14|www2|www_access|172.68.218.47 - -...|
- |2018-08-01 17:14|www2|www_access|172.69.70.72 - - ...|
- |2018-08-01 17:15|www2|www_access|172.68.63.24 - - ...|
- |2018-08-01 17:18|www2|www_access|192.168.1.7 - - [...|
- |2018-08-01 17:18|www2|www_access|141.101.99.138 - ...|
- |2018-08-01 17:19|www2|www_access|192.168.1.7 - - [...|
- |2018-08-01 17:19|www2|www_access|162.158.89.74 - -...|
- |2018-08-01 17:19|www2|www_access|172.68.54.35 - - ...|
- +----------------+----+----------+--------------------+
- only showing top 20 rows
Once again we see that the DataFrame has four columns matching our schema. A DataFrame here can be thought of as a database table or an Excel spreadsheet.
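Note that the fourth column (`_c3`) still holds the raw access-log entry as a single string. As a minimal sketch of how such an entry could be broken into fields, assuming the standard Apache log format (the sample line and field names below are illustrative, not taken from the dataset):

```python
import re

# Hypothetical raw entry, like those truncated in the _c3 column above;
# assumes the common Apache access-log layout.
raw = '172.68.133.49 - - [01/Aug/2018:17:10:15 +0000] "GET /index.html HTTP/1.1" 200 2326'

# client IP, identd, user, [timestamp], "request line", status, response size
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) (?P<identd>\S+) (?P<user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

fields = LOG_PATTERN.match(raw).groupdict()
print(fields["ip"], fields["status"], fields["request"])
# → 172.68.133.49 200 GET /index.html HTTP/1.1
```

A function like this could be registered as a Spark UDF to turn the raw `_c3` strings into proper columns.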
3. Python SciKit-Learn
Any discussion of big data invariably leads to a discussion of machine learning, and fortunately Python developers have plenty of options for working with machine learning algorithms.
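As a quick taste of scikit-learn, here is a minimal sketch that trains a classifier on the library's bundled iris dataset; it is purely illustrative and unrelated to the access-log data above:

```python
# Minimal scikit-learn sketch: fit a logistic-regression classifier
# on the bundled iris dataset and report held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

The same fit/predict/score pattern applies across scikit-learn's estimators, which is what makes the library easy to pick up.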