Trying Out CentOS 8 and Installing Docker


Installing CentOS 8

Background: with nothing to do over the May Day holiday, I installed the latest CentOS 8 on an old laptop of mine to play with.

  1. I went to the official site to download the latest CentOS image, but it was painfully slow, so I grabbed it from the Aliyun mirror instead, which is really fast. Here is the address:

https://mirrors.aliyun.com/centos/8.1.1911/isos/x86_64/

There is a version with all the packages included and a network-install version. The DVD image is too big (about 7 GB), so I chose the boot image and a network install.

  1. A pile of pitfalls

    In the past I always installed systems through GRUB, but I have completely forgotten the commands; after fiddling for ages, boot kept failing :(

    If you know how, please tell me, thanks!

    It was too much hassle, so I went for a simpler route: burn the ISO to a USB stick with UltraISO and install directly. I was too naive; it still errored out:

    no floppy found please insert floppy and go on....

    So it said there was no floppy disk. I was baffled; how could a floppy error show up?

    Later I looked at the isolinux boot parameters:

    label linux
    menu label ^Install CentOS Linux 8
    kernel vmlinuz
    append initrd=initrd.img inst.stage2=hd:LABEL=CentOS-8-1-1911-x86_64-dvd quiet

    There is a LABEL option, but it is prefixed with hd:, so why would it look for a floppy? Puzzling; possibly a legacy default. I changed this option to point at my own USB stick, which on my machine was hdb4. If you don't know the device name, you can let the boot fail; after a few minutes you drop into a command-line mode where you can run:

    ls /dev/hd* # list the currently attached disks

    So I changed the config file as follows, and it booted fine:

    label linux
    menu label ^Install CentOS Linux 8
    kernel vmlinuz
    append initrd=initrd.img inst.stage2=hd:/dev/hdb4 quiet

    Since I chose the network-install image, the installer later asks for the network address of the repo. There is a default "closest mirror" option that may succeed on its own; if the network fails, you can enter the Aliyun repo:

    https://mirrors.aliyun.com/centos/8/BaseOS/x86_64/os/

The installation finally succeeded.

Installing Docker

At first I didn't know that Aliyun also mirrors Docker, so I pasted the repo setup from the official site:

sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

sudo yum install docker-ce docker-ce-cli containerd.io

Unexpectedly, it errored out:


The message said containerd.io version 1.2.2-3 or newer is required.

Did the mirror not have a new enough package? Puzzled again, I went to the official site to download a version that satisfies 1.2.2-3. Address:

https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm

That can be slow, of course, so I used the Aliyun mirror and picked a version myself:

https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/

Next, with the rpm package downloaded, install it directly:

rpm -ivh containerd.io-1.2.13-3.1.el7.x86_64.rpm

There was still a problem: CentOS 8 ships with runc, a container runtime similar to Docker's. Since I wanted Docker, I simply removed it:

yum remove runc

After that, the installation went through fine.

I also switched the Docker yum repo to Aliyun's; the speed difference is enormous:

sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum install docker-ce

Finally installed; start the Docker daemon:

service docker start

Running Docker as a non-root user

After Docker starts, only root can use it by default; everyone else gets a "no permission…" error.

Looking at Docker's communication socket, it clearly belongs to root:

ll /var/run/docker.sock
srw-rw----. 1 root root 0 May 25 14:43 /var/run/docker.sock

Add a docker group, then add every user who needs Docker to that group:

groupadd docker
gpasswd -a gfshi docker

Restart the Docker service.

Now ordinary users can use it too~

Time to play with Docker.

Basic mysql8 / mysql 5.7.24 operations

Recently I installed MySQL on Ubuntu 18.04 LTS, and setting the password nearly broke me; it turned out the way passwords are set changed in MySQL 8. Here is a record of the problem and of how to set the password.

Why can't I change the password?

I used to change the root password with grant/alter/set. Today I installed 5.7.29, tried the same methods, and they kept failing: the system root account could log in to MySQL without a password, and ordinary users couldn't log in at all.

Changing the password with grant

mysql> select User,Host,authentication_string from user;
+------------------+-----------+-------------------------------------------+
| User | Host | authentication_string |
+------------------+-----------+-------------------------------------------+
| root | localhost | *000BFFBA444B9D1E98861B0537ABAA4664A2CAA1 |
| mysql.session | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE |
| mysql.sys | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE |
| debian-sys-maint | localhost | *87BD4C29046C1B2C43D1FB7342F5F1BA286253BC |
+------------------+-----------+-------------------------------------------+
4 rows in set (0.00 sec)

mysql> grant all on *.* to 'root'@'localhost' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

The statements returned no error, but after quitting and restarting MySQL, the password had not changed.

Wondering whether I had botched something, I ran it twice more, to no avail.

So I switched to another way:

mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('123456');
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Again I ran it twice; still nothing.

Look at the output after executing

Later I noticed that one run produced a warning; in fact both statements returned one, so I inspected the two warnings:

mysql>   grant all on *.* to 'root'@'localhost' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> show warnings;
+---------+------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message |
+---------+------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning | 1287 | Using GRANT statement to modify existing user's properties other than privileges is deprecated and will be removed in future release. Use ALTER USER statement for this operation. |
+---------+------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Using GRANT statement to modify existing user’s properties other than privileges is deprecated and will be removed in future release. Use ALTER USER statement for this operation.

In other words, GRANT for this purpose is deprecated; modify users with ALTER USER instead.

mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('123456');
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show warnings
-> ;
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message |
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning | 1287 | 'SET PASSWORD FOR <user> = PASSWORD('<plaintext_password>')' is deprecated and will be removed in a future release. Please use SET PASSWORD FOR <user> = '<plaintext_password>' instead |
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

'SET PASSWORD FOR &lt;user&gt; = PASSWORD('&lt;plaintext_password&gt;')' is deprecated and will be removed in a future release. Please use SET PASSWORD FOR &lt;user&gt; = '&lt;plaintext_password&gt;' instead

This way of setting the password has also been deprecated; the suggested form is

SET PASSWORD FOR &lt;user&gt; = '&lt;plaintext_password&gt;'

Still, these are only warnings; the statements actually succeed, and they merely announce that PASSWORD() goes away in the next version. Keep digging.

Fine, try once more with the plaintext form:

mysql> SET PASSWORD FOR 'root'@'localhost'='123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)

Yet another warning showed up, so check what it says:

mysql> SET PASSWORD FOR 'root'@'localhost'='123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show warnings;
+-------+------+------------------------------------------------------------------------------------------------------------+
| Level | Code | Message |
+-------+------+------------------------------------------------------------------------------------------------------------+
| Note | 1699 | SET PASSWORD has no significance for user 'root'@'localhost' as authentication plugin does not support it. |
+-------+------+------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

SET PASSWORD has no significance for user ‘root’@’localhost’ as authentication plugin does not support it.

So the authentication plugin doesn't support SET PASSWORD.

Look at the current user's plugin then; conveniently, the user table has a plugin column:

mysql> select Host,User,plugin from user;
+-----------+------------------+-----------------------+
| Host | User | plugin |
+-----------+------------------+-----------------------+
| localhost | root | auth_socket |
| localhost | mysql.session | mysql_native_password |
| localhost | mysql.sys | mysql_native_password |
| localhost | debian-sys-maint | mysql_native_password |
+-----------+------------------+-----------------------+
4 rows in set (0.00 sec)

The plugin looks like the culprit, so change it:

use mysql;
update user set plugin="mysql_native_password" where User='root';
flush privileges;

Restart mysql after the change.

The password finally changes

With the plugin changed, all of the password-change statements work, and the password was finally updated.

Method 1: set password for, with a plaintext password
mysql> set password for 'root'@'localhost'='123456';
Query OK, 0 rows affected (0.00 sec)
Method 2: alter user
mysql> alter user 'root'@'localhost' identified by 'P@55word';
Query OK, 0 rows affected (0.00 sec)
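Incidentally, the mysql_native_password hashes shown in the authentication_string column earlier are just `*` followed by the uppercase hex of SHA1 applied twice to the password bytes. A quick sketch in Python (the password '123456' here is only an illustration):

```python
import hashlib

def mysql_native_password_hash(password: str) -> str:
    # mysql_native_password stores '*' + uppercase hex of SHA1(SHA1(password)),
    # where the inner SHA1 result is hashed again as raw bytes.
    first = hashlib.sha1(password.encode()).digest()
    return "*" + hashlib.sha1(first).hexdigest().upper()

print(mysql_native_password_hash("123456"))
# -> *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9
```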

Index operations

  1. View indexes
show index from [table name];
  1. Create an index
ALTER TABLE table_name ADD [UNIQUE | FULLTEXT | SPATIAL]  INDEX | KEY  [index_name] (col_name1 [(length)] [ASC | DESC]) [USING index_method];



CREATE  [UNIQUE | FULLTEXT | SPATIAL]  INDEX  index_name ON  table_name(col_name) [USING index_method];
  1. Drop an index
DROP INDEX index_name ON table_name



ALTER TABLE table_name DROP INDEX index_name
  1. Check whether a query hits an index
explain [sql]

mysql> explain select * from sentence_daily where date_str='2019-10-25'; (hits index date_str_type)
+----+-------------+----------------+------------+------+---------------+---------------+---------+-------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------------+------------+------+---------------+---------------+---------+-------+------+----------+-------+
| 1 | SIMPLE | sentence_daily | NULL | ref | date_str_type | date_str_type | 50 | const | 1 | 100.00 | NULL |
+----+-------------+----------------+------------+------+---------------+---------------+---------+-------+------+----------+-------+
1 row in set, 1 warning (0.01 sec)

mysql> explain select * from sentence_daily where type=1; (no index hit)
+----+-------------+----------------+------------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------------+------------+------+---------------+------+---------+------+------+----------+-------------+
| 1 | SIMPLE | sentence_daily | NULL | ALL | NULL | NULL | NULL | NULL | 101 | 10.00 | Using where |
+----+-------------+----------------+------------+------+---------------+------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

Redis high-availability setups

Redis has three high-availability schemes:

  • master/slave replication

The master serves reads and writes; slaves serve reads.

If the master fails, failover must be done by hand.

Whether to read from the master or a slave is decided by the client.

  • sentinel mode for master/slave switchover and failure recovery

On top of the master/slave setup, sentinels monitor the nodes; when the master becomes unavailable, a slave is automatically promoted to master.

Clients obtain the master/slave addresses through the sentinels.

  • redis cluster

Decentralized, peer structure, data slots.

This post mainly covers the simple master/slave mode and the sentinel architecture.

redis master/slave (replication)

Master/slave exists to remove the single point of failure; a slave is a mirror of its master.

How it works

  • The master replicates data to slaves asynchronously, and slaves acknowledge back to the master asynchronously as well
  • One master can have multiple slaves
  • Slaves can also connect to slaves (since 4.0, all sub-slaves receive the replication stream from the master)
  • Replication does not block reads or writes on the master
  • During replication, a slave either serves its old dataset or returns errors, depending on the replica-serve-stale-data setting; once the sync completes, the old dataset is dropped and the new one loaded, during which incoming connections are blocked
  • Slaves can be used for read scaling, or for data safety and high availability
  • The master can disable persistence and work purely in memory while slaves persist the data; beware that if the master restarts empty, the slaves will also sync the empty dataset

How redis replication works

  • Two values identify the synchronization state

replication ID: a pseudo-random string generated along with the current dataset; every master has one

offset: the replication position between master and slave

When a slave connects to a master, it sends its current master replication ID and offset with the PSYNC command

  • Normally, the master sends the slave the incremental data starting from that offset
  • If the slave sends an unknown replication ID, or the master's backlog no longer covers the offset, a full resynchronization starts
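The partial-versus-full decision can be sketched as a toy function (illustrative names only, not Redis internals):

```python
def sync_kind(master_replid, backlog_start, backlog_end, slave_replid, slave_offset):
    # Toy model of PSYNC: a partial resync is possible only when the
    # replication IDs match and the requested offset is still covered
    # by the master's backlog; otherwise a full resync is needed.
    if slave_replid != master_replid:
        return "full"
    if not (backlog_start <= slave_offset <= backlog_end):
        return "full"
    return "partial"
```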
The full resync process
  1. When the master receives a full-resync request, it starts a bgsave to produce an RDB file; at the same time it opens a buffer to hold all new write commands

Buffer configuration:

client-output-buffer-limit replica 256MB 64MB 60

If during replication the output buffer stays above 64MB for 60 straight seconds, or exceeds 256MB at once, replication is stopped and fails. The last two parameters work together: if usage stays above 64MB for 59 seconds but drops below 64MB by the 60-second mark, the connection is kept and replication continues.
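The soft/hard limit logic reads as follows, as a toy model (one buffer reading per second; not Redis's actual code):

```python
def should_disconnect(samples, hard=256, soft=64, soft_seconds=60):
    # Toy model of client-output-buffer-limit: samples is a list of
    # (second, buffer_mb) readings taken once per second. Disconnect if
    # the buffer ever exceeds the hard limit, or stays above the soft
    # limit for soft_seconds consecutive seconds.
    streak = 0
    for _, mb in samples:
        if mb > hard:
            return True
        if mb > soft:
            streak += 1
            if streak >= soft_seconds:
                return True
        else:
            streak = 0
    return False
```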

  1. When the bgsave finishes, the master sends the RDB file to the slave, which saves it to disk and loads it into memory

  2. The master then sends the buffered commands to the slave

Below is the master's log of a sync:

3829:M 10 Oct 2019 14:47:08.193 * Replica 10.226.50.31:6379 asks for synchronization
3829:M 10 Oct 2019 14:47:08.193 * Full resync requested by replica 10.226.50.31:6379
3829:M 10 Oct 2019 14:47:08.193 * Starting BGSAVE for SYNC with target: disk
3829:M 10 Oct 2019 14:47:08.194 * Background saving started by pid 4102
4102:C 10 Oct 2019 14:47:09.637 * DB saved on disk
4102:C 10 Oct 2019 14:47:09.637 * RDB: 6 MB of memory used by copy-on-write
3829:M 10 Oct 2019 14:47:09.685 * Background saving terminated with success
3829:M 10 Oct 2019 14:47:09.731 * Synchronization with replica 10.226.50.31:6379 succeeded
Diskless replication

Normally a full resync creates an RDB file on disk and then reads that file back to send it to the slave.

A slow disk seriously hurts the master, so replication can be configured to skip the disk and stream directly:

repl-diskless-sync yes

# wait 5s before starting, so that more replicas have time to reconnect
repl-diskless-sync-delay 5

Notes:

  1. Don't use a slave as a hot standby for a master whose persistence is off; if the master crashes and restarts empty, it will immediately sync the empty dataset to the slave, losing the data
  2. Have a backup plan for the master to guard against local file loss; sentinel can monitor and fail over, but the master may restart before the failure is even detected, which again leads to case 1
Writes succeed only with at least N slaves

Since Redis 2.8, the master can be configured to accept writes only when at least N slaves are connected.

Because Redis replicates asynchronously, it cannot guarantee a slave actually received a given write, so there is a window in which writes can be lost.

How it works:

  • Every second, each slave pings the master, reporting the amount of replication stream it has processed
  • The master records the time of each slave's last ping
  • The user configures the minimum number of slaves whose lag must not exceed a maximum number of seconds

If at least N slaves have a lag under M seconds, the write succeeds; otherwise an error is returned and the write fails.

Redis configuration parameters:

  • min-slaves-to-write <number of slaves>
  • min-slaves-max-lag <number of seconds>
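The rule amounts to: accept the write only when at least min-slaves-to-write replicas report a lag no greater than min-slaves-max-lag. As a toy check:

```python
def write_allowed(lags_seconds, min_slaves, max_lag):
    # Toy model of min-slaves-to-write / min-slaves-max-lag: count the
    # replicas whose last-ping lag is within max_lag and compare against
    # the required minimum.
    good = sum(1 for lag in lags_seconds if lag <= max_lag)
    return good >= min_slaves
```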
Handling expired keys
  • A replica never expires keys itself; it waits for the master to do it. When the master expires a key (or evicts one via LRU), it synthesizes a DEL command and sends it to the replicas
  • Because expiry is driven by the master, the replica's memory may still hold keys that are logically expired; for read operations the replica uses its logical clock to report such keys as nonexistent, so consistency is not violated
  • Expiry is not performed while a Lua script is running; the same Lua script is sent to the replicas to keep them consistent
heartbeat

Master and replica nodes send each other heartbeat messages.

By default the master sends a heartbeat every 10 seconds, and each replica node sends one every second.

redis replication configuration

The configuration is simple; knowing the basics above mostly helps you locate problems quickly when they appear.

replicaof 10.226.50.31 6381


# master: require an auth password
# requirepass foobared
# slave: authenticate against the master
# masterauth foobared

replicaof [ip] [port]

Before 5.0.4 the directive was slaveof.

Adjust bind/port/logfile/dbfilename/dir and the other settings in the config, then start redis:

redis-server /etc/redis-6379.conf
redis-server /etc/redis-6381.conf

After starting, check replication on 6379: its role is slave, and writing data fails:

[root@localhost ~]# redis-cli 
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:10.226.50.31
master_port:6381
master_link_status:up
master_last_io_seconds_ago:10
master_sync_in_progress:0
slave_repl_offset:128880
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:a868c97d4f90ce5e77d6d6bd1787e8f61df35eff
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:128880
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:128880
127.0.0.1:6379> set a b
(error) READONLY You can't write against a read only replica.

The 6381 node's role is master; it has a master replid and the slave's sync offset. Writes succeed, and checking the slave shows the data has been replicated:

[root@localhost ~]# redis-cli -p 6381
127.0.0.1:6381> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=10.226.50.31,port=6379,state=online,offset=128978,lag=0
master_replid:a868c97d4f90ce5e77d6d6bd1787e8f61df35eff
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:128978
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:128978
127.0.0.1:6381> set a b
OK

redis sentinel

Sentinel is a tool for monitoring the state of masters in a Redis deployment; sentinel mode has been bundled with Redis since version 2.4.

A sentinel process can watch one or more redis master/slave services; when a master goes down, one of its slaves is automatically promoted to master to keep serving requests in its place.

Writing blog posts takes real time. Unfinished, to be continued...

Some Linux configuration

Raising the TCP listen backlog

Change the somaxconn parameter

This kernel parameter defaults to 128 (it caps the listen queue length of every port on the system), which is far too small for heavily loaded servers. It is usually raised to 2048 or more.

echo 2048 > /proc/sys/net/core/somaxconn    # temporary change, takes effect immediately; lost on reboot

To make it persistent, add the following to /etc/sysctl.conf:

net.core.somaxconn = 2048

Then run in a terminal:

sysctl -p

redis (overcommit_memory)WARNING

Sometimes redis background saves fail; the log shows the warning below, which is very likely the cause:

17427:M 17 Sep 2019 10:54:12.730 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.

Seeing this, I looked it up along the way and found:

The kernel parameter overcommit_memory

It is the memory allocation policy, with possible values 0, 1, and 2:

0 – the kernel checks whether enough free memory is available; if so, the allocation is allowed; otherwise it fails and the error is returned to the process.
1 – the kernel allows allocating all of physical memory, regardless of the current memory state.
2 – the kernel refuses allocations that would exceed swap plus a configurable fraction of physical memory (strict no-overcommit)

Overcommit and OOM

Linux answers "yes" to most memory allocation requests so that it can run more and bigger programs, since allocated memory is not used immediately. This technique is called overcommit. When Linux finds itself out of memory, the OOM killer (OOM = out-of-memory) kicks in and kills some processes (user-space processes, not kernel threads) to free memory.

Which processes does the OOM killer pick? The selection function is oom_badness (in mm/oom_kill.c), which scores every process from 0 to 1000; the higher the score, the more likely the process is to be killed. Each process's score is influenced by oom_score_adj, which can be set per process (-1000 lowest, 1000 highest).
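As a toy illustration only (the real formula lives in mm/oom_kill.c), the score can be modeled as a memory-usage fraction scaled to 0..1000, shifted by oom_score_adj and clamped:

```python
def oom_badness(mem_fraction, oom_score_adj):
    # Toy model: base score proportional to the share of total memory used,
    # shifted by oom_score_adj (-1000..1000) and clamped to 0..1000.
    score = int(mem_fraction * 1000) + oom_score_adj
    return max(0, min(1000, score))
```

So a process using half of memory with a -1000 adjustment is effectively immune, while a large process with a positive adjustment becomes the preferred victim.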

The fix

Again, change the kernel parameter:

echo 1 > /proc/sys/vm/overcommit_memory # temporary change, takes effect immediately; lost on reboot

Edit /etc/sysctl.conf, add the line below, then run sysctl -p to apply it:

vm.overcommit_memory=1

redis (Transparent Huge Pages) WARNING

redis warns at startup on CentOS 7:

17427:M 17 Sep 2019 10:54:12.730 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.

Some official words on Transparent Huge Pages:

The kernel will always attempt to satisfy a memory allocation using huge pages. If no huge pages are available (due to non availability of physically continuous memory for example) the kernel will fall back to the regular 4KB pages. THP are also swappable (unlike hugetlbfs). This is achieved by breaking the huge page to smaller 4KB pages, which are then swapped out normally.

Disabling transparent huge pages

[root@localhost 6379]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]

THP is enabled by default; the bracketed value in the output marks the current setting (here it has already been switched to never).

Limits on open files

The per-user open-file limit can be viewed with ulimit -n:

[root@localhost opt]# ulimit -n    # maximum number of files the current user may open
1024

The system-wide limit is shown by sysctl -a:

[root@localhost opt]# sysctl -a | grep file-max
fs.file-max = 284775

The system maximum is usually computed from the hardware resources. To force a higher per-user limit, run ulimit -n 10240, but that only affects the current process; for a permanent change, edit /etc/security/limits.conf (takes effect after a reboot):

#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open file descriptors
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
#
#<domain> <type> <item> <value>
#

#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4

# End of file

root soft nofile 65535 # added
root hard nofile 65535

Meaning of the columns:

1: user name; "*" means all users

2: soft (soft limit) / hard (hard limit)

3: the resource, here the maximum number of open files

4: the value

Count open files across all processes
lsof | wc -l
Count files opened by one process
lsof  -p [pid] | wc -l
Show how many handles each process has open
lsof -n|awk '{print $2}'|sort|uniq -c|sort -nr|more
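The last pipeline is just a group-by count over the second column of lsof output (the PID), most frequent first. The same idea on synthetic lines, in Python:

```python
from collections import Counter

# lsof-like lines: the second column is the PID; count entries per PID,
# most frequent first (what awk|sort|uniq -c|sort -nr computes).
lines = ["bash 10", "bash 10", "sshd 20", "bash 10", "sshd 20"]
counts = Counter(line.split()[1] for line in lines)
print(counts.most_common())
# -> [('10', 3), ('20', 2)]
```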

SSH key login

  1. Generate a public/private key pair; on Linux use the ssh-keygen command and press Enter through the prompts
[root@localhost ~]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): # where to store the key
Enter passphrase (empty for no passphrase): # optional passphrase
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:N9Q7EJosUhKrKw+Utf5jcSgpg7fzJS5EtVfa5iinWOk root@localhost.localdomain
The key's randomart image is:
+---[RSA 2048]----+
| o.. . |
| .+ ..o o |
| oo..++ o . |
| +.o.o.o. . . |
|.+...o.+S o o |
|oo+++oo... . . |
|oo+*o++ |
| +=.E= |
| .++.. |
+----[SHA256]-----+
  1. After generating the pair, upload the public key to ~/.ssh/authorized_keys of the account you want to log in as on the server
[root@localhost ~]# cat ~/.ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyUOANXFleDWrpJop7zrM2eZxrna8+uIiSbz72IRsAO3intbNkyHJpULv9yYUmT4iPf4Vn4QYmyogFhdpTktKSADBbyH5VXIGyFjbWeO7ix1iVr7YVQQ/4P/nELVytCUiIojFdZ+DvyYSariLzLuliFpYTMJ4jpmgpL/pAUobEazpGwjlRUOWik3+8kLGpsxHYJNUmrZKSnNaOYqDJVGfO3KBfozO+I5B/wcwSW5hje7Y5xyfdDzvuVh7uVmKQbjw3WoMiy64pTcKB1S3tQtPZfXnmOd3tUZU8SXSfcvhHdgrbG6kFBrJwqjpj/sE4zL9nWbSZlbpJD7gXtlkdrAH1 root@localhost.localdomain
  1. On the server, check ~/.ssh/authorized_keys
[root@wpspic6 opt]# cat ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyUOANXFleDWrpJop7zrM2eZxrna8+uIiSbz72IRsAO3intbNkyHJpULv9yYUmT4iPf4Vn4QYmyogFhdpTktKSADBbyH5VXIGyFjbWeO7ix1iVr7YVQQ/4P/nELVytCUiIojFdZ+DvyYSariLzLuliFpYTMJ4jpmgpL/pAUobEazpGwjlRUOWik3+8kLGpsxHYJNUmrZKSnNaOYqDJVGfO3KBfozO+I5B/wcwSW5hje7Y5xyfdDzvuVh7uVmKQbjw3WoMiy64pTcKB1S3tQtPZfXnmOd3tUZU8SXSfcvhHdgrbG6kFBrJwqjpj/sE4zL9nWbSZlbpJD7gXtlkdrAH1 root@localhost.localdomain
  1. Test the login; here I'm using root

    [root@localhost opt]# ssh root@10.226.50.30 
    Last login: Tue Sep 17 16:23:57 2019 from 10.226.28.68 # login succeeded

    Add the -v option to print the login process

    [root@localhost opt]# ssh root@10.226.50.30 -v
    OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_config line 58: Applying options for *
    debug1: Connecting to 10.226.50.30 [10.226.50.30] port 22.
    debug1: Connection established.
    debug1: permanently_set_uid: 0/0
    debug1: identity file /root/.ssh/id_rsa type 1
    debug1: key_load_public: No such file or directory
    debug1: identity file /root/.ssh/id_rsa-cert type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /root/.ssh/id_dsa type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /root/.ssh/id_dsa-cert type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /root/.ssh/id_ecdsa type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /root/.ssh/id_ecdsa-cert type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /root/.ssh/id_ed25519 type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /root/.ssh/id_ed25519-cert type -1
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_7.4
    debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4
    debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
    debug1: Authenticating to 10.226.50.30:22 as 'root'
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: algorithm: curve25519-sha256
    debug1: kex: host key algorithm: rsa-sha2-512
    debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
    debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
    debug1: kex: curve25519-sha256 need=64 dh_need=64
    debug1: kex: curve25519-sha256 need=64 dh_need=64
    debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
    debug1: Server host key: ssh-rsa SHA256:yKoRmi5QgIlXrrhYQcP5W0Mx2PhSoTsm5Z+DhdeYFpU
    debug1: Host '10.226.50.30' is known and matches the RSA host key.
    debug1: Found key in /root/.ssh/known_hosts:1
    debug1: rekey after 134217728 blocks
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: rekey after 134217728 blocks
    debug1: SSH2_MSG_EXT_INFO received
    debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
    debug1: Next authentication method: gssapi-keyex
    debug1: No valid Key exchange context
    debug1: Next authentication method: gssapi-with-mic
    debug1: Unspecified GSS failure. Minor code may provide more information
    No Kerberos credentials available (default cache: KEYRING:persistent:0)

    debug1: Unspecified GSS failure. Minor code may provide more information
    No Kerberos credentials available (default cache: KEYRING:persistent:0)

    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /root/.ssh/id_rsa
    debug1: Server accepts key: pkalg rsa-sha2-512 blen 279
    debug1: Authentication succeeded (publickey).
    Authenticated to 10.226.50.30 ([10.226.50.30]:22).
    debug1: channel 0: new [client-session]
    debug1: Requesting no-more-sessions@openssh.com
    debug1: Entering interactive session.
    debug1: pledge: network
    debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
    debug1: Sending environment.
    debug1: Sending env LANG = en_US.UTF-8
    Last login: Wed Sep 18 12:00:12 2019 from 10.226.50.31
    [root@wpspic6 ~]#
  1. Login may fail; add the -v option to the ssh command to trace the login process;
  2. If the private key was copied from another machine, copy id_rsa.pub along with it (or delete the stale one), otherwise it can interfere with login;

Installing redis

Installation

  1. Download the redis source tarball directly with wget; the latest at the moment is 5.0
wget http://download.redis.io/releases/redis-5.0.5.tar.gz
  1. Unpack
tar -xvf redis-5.0.5.tar.gz
  1. Enter the source directory
cd redis-5.0.5/src
  1. Build
make && make install

Running

  • After a successful install, the config file is /etc/redis.conf

The defaults are daemonize no and bind 127.0.0.1; run it directly:

redis-server /etc/redis.conf

If the screen below appears after running, it started successfully:

(screenshot: redis startup banner)

  • Access it
[root@localhost ~]# redis-cli
127.0.0.1:6379> set foo bar
OK
127.0.0.1:6379> keys *
1) "foo"
127.0.0.1:6379>
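Under the hood, redis-cli talks the RESP protocol; encoding a command as an array of bulk strings is simple enough to sketch (illustrative only, not a full client):

```python
def encode_resp(*args):
    # Encode a Redis command as a RESP array of bulk strings,
    # e.g. SET foo bar -> *3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n
    out = b"*%d\r\n" % len(args)
    for arg in args:
        data = arg.encode()
        out += b"$%d\r\n%s\r\n" % (len(data), data)
    return out

print(encode_resp("SET", "foo", "bar"))
```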
  • Edit the config file (daemonize, bind, and so on), then rerun to start it with the new settings
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.

daemonize yes

# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

bind 127.0.0.1

Problems that may come up during installation

  1. If CentOS has no gcc installed, you'll see
cc: command not found

Install gcc:

yum install -y gcc
  1. make reports the following error:
zmalloc.h:50:31: error: jemalloc/jemalloc.h: No such file or directory
zmalloc.h:55:2: error: #error "Newer version of jemalloc required"

The reason is that jemalloc overrides the ANSI C malloc and free functions on Linux. Workaround: pass a parameter to make.

make MALLOC=libc
  1. After make, a hint is printed:
Hint: To run 'make test' is a good idea ;)

You can usually use redis without running the tests; if you do run make test, you may see:

You need tcl 8.5 or newer in order to run the Redis test
make: *** [test] Error 1

The fix is to install tcl 8.5 or newer with yum (or download 8.5 from the tcl site, http://www.tcl.tk/, and follow its install instructions):

yum install tcl

Building a mongodb replica set

Overview

A MongoDB replica set consists of a group of mongod instances (processes): one Primary node plus several Secondary and/or Arbiter nodes.
All writes in the replica set go to the Primary; Secondaries sync the oplog from the Primary and apply it locally, so every member of the set stores the same data set, making the data highly available.

Common scenarios
  • Data redundancy
    A delayed-write node can be added to the set to guard against fat-finger operations
  • Read/write splitting
    Suits read-heavy, write-light workloads
Typical structure

(diagram: replica set structure)

Every node in the set can accept reads; by default drivers read from the primary, which can be changed by setting read_preference.

A replica set can have up to 50 members but only 7 voting members.

A replica set can have up to 50 members, but only 7 of them can vote.

Node roles

  1. Primary

Receives all read and write requests and ships the oplog to the secondaries. A replica set has exactly one Primary; when it goes down, the remaining Secondary/Arbiter nodes elect a new one.

  1. Secondary

Keeps the same data set as the primary, and takes part in the election when the primary goes down.

  1. Arbiter

Holds no data and can never become primary; it only votes in primary elections. It exists to break ties when there is an even number of voting nodes, so it needs almost no hardware resources.

Do not run an arbiter on systems that also host the primary or the secondary members of the replica set.

Don't put an arbiter on the same machine as the primary or a secondary.

Two layouts

  • PSS

Primary + Secondary + Secondary: a replica set built from a primary and data-bearing secondaries

(diagram: PSS layout)

  • PSA

Primary + Secondary + Arbiter: a replica set that uses an arbiter

(diagram: PSA layout)

An even number of data nodes plus one arbiter make up the replica set.

Election mechanism

A replica set is initialized with the replSetInitiate command or rs.initiate().

After initialization the nodes start exchanging heartbeat messages and kick off a primary election; the node that wins a majority of member votes becomes Primary and the rest become Secondaries.

config = {
_id : "test_replset", # 副本集名称
members : [
{_id : 0, host : "rs1.example.net:27017"}, # member list
{_id : 1, host : "rs2.example.net:27017"},
{_id : 2, host : "rs3.example.net:27017"},
]
}
rs.initiate(config)

If the set has N voting members (more on these below), a majority is N/2 + 1. When fewer than a majority of the members are alive, the set cannot elect a primary; it can no longer serve writes and goes read-only.
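The majority rule is just floor(N/2) + 1:

```python
def majority(voting_members):
    # Majority needed to elect a primary in a replica set of N voters.
    return voting_members // 2 + 1
```

So a 3-node set tolerates one failure, while a 4-node set still needs 3 votes, which is why even member counts add an arbiter rather than another data node.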

  • MongoDB replica-set elections are based on the Bully algorithm, a coordinator-election algorithm

  • The primary election is influenced by heartbeats between nodes, priorities, the freshest oplog time, and other factors

Special roles
  • Arbiter

An Arbiter node only votes; it cannot be elected Primary and does not sync data from the Primary.

  • Priority==0

A Priority0 node has election priority 0 and will never be elected Primary.

  • Vote==0

Since MongoDB 3.0, a set can have up to 50 members but at most 7 voters in the primary election; the remaining members (Vote0) must set their vote attribute to 0 and do not vote.

  • Hidden==true

Hidden nodes cannot be elected primary (their priority is 0) and are invisible to clients.

  • Delayed==time seconds

A Delayed node must also be Hidden, otherwise inconsistent data could be read; its data lags the Primary by a configurable window (say, half an hour or an hour)

All states
Number Name Description
0 STARTUP Not yet an active member of the set; reading its configuration
1 PRIMARY The only node that accepts writes
2 SECONDARY A secondary, holding a copy of the primary's data
3 RECOVERING Performing startup self-checks, or finishing a rollback or resync
5 STARTUP2 Has joined the set and is syncing initial data
6 UNKNOWN State unknown, as seen from another member
7 ARBITER Arbiter; only votes in primary elections
8 DOWN Unreachable member
9 ROLLBACK Rolling back writes
10 REMOVED Was once in the set but has been removed
What triggers an election
  • A node is added to the replica set
  • The replica set is initialized
  • The primary steps down, e.g. via rs.stepDown() or rs.reconfig()
  • A secondary loses contact with the primary (for more than 10s by default, tunable via heartbeatTimeoutSecs); the secondary then initiates the election

The replica set's automatic failover is implemented through these heartbeats, which is what delivers the high availability.

(diagram: automatic failover via heartbeats)

Read/write configuration

Read Preference

All read preference modes except primary may return stale data because secondaries replicate operations from the primary with some delay. Ensure that your application can tolerate stale data if you choose to use a non-primary mode.

Every mode except primary may return stale data, since secondaries replicate with some delay; make sure your application can tolerate stale reads before using a non-primary mode.

Read Preference Mode Description
primary Read from the primary only; the default. Multi-document transactions that contain read operations must use primary, and all operations in a transaction must route to the same member
primaryPreferred Prefer the primary; if it is unavailable, read from a secondary
secondary Read from secondaries
secondaryPreferred Prefer a secondary; if none is available, read from the primary
nearest Read from the node with the lowest network latency, primary or not

Write Concern

Write concern describes the level of acknowledgment requested from MongoDB for write operations

The acknowledgment level for write operations

The options look like:

{ w: <value>, j: <boolean>, wtimeout: <number> }

w: how many mongod instances the write must have propagated to

j: wait until the write has been written to the on-disk journal

wtimeout: how long the write may block

value description
1 the default: acknowledged by the primary
0 no acknowledgment requested; exceptions may still be raised
&gt;1 acknowledged by the given number of data-bearing members
majority acknowledged by a majority of the data-bearing voting members
&lt;tag set&gt; acknowledged by members carrying the given tag

Hidden, delayed, and priority 0 members with member[n].votes greater than 0 can acknowledge majority write operations.

Read Concern (consistency / isolation)

level description
local Returns the node's most recent data, with no guarantee it has been majority-committed
available Like local, the weakest level; on sharded clusters it may even return orphaned documents
majority Returns data acknowledged by a majority of the members
linearizable Reads reflect all majority-acknowledged writes completed before the read started
snapshot Reads from a majority-committed point-in-time snapshot

Configuration

  • db version v4.2.0
  • Copy the config file three times and adjust

path/dbPath/pidFilePath

bindIp: bind 0.0.0.0 if access from outside the host is needed

# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod-27017.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo-27017
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod-27017.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.


#security:

#operationProfiling:

replication:
  replSetName: screensaver  # replica set name; identical on every member of the set
  oplogSizeMB: 4096

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:

I made three of these; start them:

mongod -f /etc/mongod-27017.conf
mongod -f /etc/mongod-27018.conf
mongod -f /etc/mongod-27019.conf

Connect to any one of the instances:

> rs.status()
{
    "operationTime" : Timestamp(0, 0),
    "ok" : 0,
    "errmsg" : "no replset config has been received",
    "code" : 94,
    "codeName" : "NotYetInitialized",
    "$clusterTime" : {
        "clusterTime" : Timestamp(0, 0),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
> rs.initiate()
{
    "info2" : "no configuration specified. Using a default configuration for the set",
    "me" : "127.0.0.1:27017",
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1568625811, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1568625811, 1)
}
screensaver:OTHER> rs.status()
{
    "set" : "screensaver",
    "date" : ISODate("2019-09-16T09:23:35.861Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1568625814, 4),
            "t" : NumberLong(1)
        },
        "lastCommittedWallTime" : ISODate("2019-09-16T09:23:34.040Z"),
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1568625814, 4),
            "t" : NumberLong(1)
        },
        "readConcernMajorityWallTime" : ISODate("2019-09-16T09:23:34.040Z"),
        "appliedOpTime" : {
            "ts" : Timestamp(1568625814, 4),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1568625814, 4),
            "t" : NumberLong(1)
        },
        "lastAppliedWallTime" : ISODate("2019-09-16T09:23:34.040Z"),
        "lastDurableWallTime" : ISODate("2019-09-16T09:23:34.040Z")
    },
    "lastStableRecoveryTimestamp" : Timestamp(1568625814, 4),
    "lastStableCheckpointTimestamp" : Timestamp(1568625814, 4),
    "members" : [
        {
            "_id" : 0,
            "name" : "127.0.0.1:27017",
            "ip" : "127.0.0.1",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 19222,
            "optime" : {
                "ts" : Timestamp(1568625814, 4),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2019-09-16T09:23:34Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1568625811, 2),
            "electionDate" : ISODate("2019-09-16T09:23:31Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        }
    ],
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1568625814, 4),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1568625814, 4)
}
screensaver:PRIMARY>

After initialization this instance becomes the primary of a one-member set; now add the other nodes:

screensaver:PRIMARY> rs.add('127.0.0.1:27018')
{
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1568625914, 2),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1568625914, 2)
}
screensaver:PRIMARY> rs.addArb('127.0.0.1:27019')
{
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1568625931, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1568625931, 1)
}

At this point a PSA (Primary-Secondary-Arbiter) replica set is up and running.
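To sanity-check failover (a sketch against the instances above): ask the primary to step down and watch the remaining data-bearing member get elected:

```
screensaver:PRIMARY> rs.stepDown()
// reconnect after a few seconds; the old secondary should now report PRIMARY
screensaver:SECONDARY> rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })
```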

hexo: images referenced in posts not displaying

I've been using hexo recently, but local images referenced in posts never showed up.

Plugin installation and configuration

First install an image-path-rewriting plugin named hexo-asset-image:

npm install hexo-asset-image --save

The plugin's code needs to be patched, though, or images still won't display.

Open node_modules/hexo-asset-image/index.js and replace its contents with the code below:

'use strict';
var cheerio = require('cheerio');


function getPosition(str, m, i) {
  return str.split(m, i).join(m).length;
}

var version = String(hexo.version).split('.');
hexo.extend.filter.register('after_post_render', function(data){
  var config = hexo.config;
  if(config.post_asset_folder){
    var link = data.permalink;
    if(version.length > 0 && Number(version[0]) == 3)
      var beginPos = getPosition(link, '/', 1) + 1;
    else
      var beginPos = getPosition(link, '/', 3) + 1;
    // In hexo 3.1.1, the permalink of "about" page is like ".../about/index.html".
    var endPos = link.lastIndexOf('/') + 1;
    link = link.substring(beginPos, endPos);

    var toprocess = ['excerpt', 'more', 'content'];
    for(var i = 0; i < toprocess.length; i++){
      var key = toprocess[i];

      var $ = cheerio.load(data[key], {
        ignoreWhitespace: false,
        xmlMode: false,
        lowerCaseTags: false,
        decodeEntities: false
      });

      $('img').each(function(){
        if ($(this).attr('src')){
          // For windows style path, we replace '\' to '/'.
          var src = $(this).attr('src').replace('\\', '/');
          if(!/http[s]*.*|\/\/.*/.test(src) &&
             !/^\s*\//.test(src)) {
            // For "about" page, the first part of "src" can't be removed.
            // In addition, to support multi-level local directory.
            var linkArray = link.split('/').filter(function(elem){
              return elem != '';
            });
            var srcArray = src.split('/').filter(function(elem){
              return elem != '' && elem != '.';
            });
            if(srcArray.length > 1)
              srcArray.shift();
            src = srcArray.join('/');
            $(this).attr('src', config.root + link + src);
            console.info&&console.info("update link as:-->"+config.root + link + src);
          }
        }else{
          console.info&&console.info("no src attr, skipped...");
          console.info&&console.info($(this));
        }
      });
      data[key] = $.html();
    }
  }
});

Open _config.yml and update the following:

post_asset_folder: true
url: <your GitHub Pages address or other domain>

Regenerate

After the changes, create a new post:
hexo new test

You'll see that source/_posts now contains test.md plus a folder with the same name (test)

Regenerate the HTML files and you'll see output like:
update link as:-->//<url>/2019/09/11/xxxx/1568192562650.png

Visit the page and the images finally show up!

Jenkins

Jenkins installation

JDK installation
  1. Use Java 8; download the JDK from the official site
  2. Extract it, configure JAVA_HOME and PATH, then verify:
[root@wpspic5 ~]# java -version 
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
Jenkins installation

  1. Install from the RPM package

Installing Jenkins is straightforward; RPM packages are provided.

Pick a suitable version from the official site, then download and install it.

  2. Install via yum

Import the yum repo:

sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key

Then install it:

yum install jenkins

Jenkins configuration

The default port is 8080; open it in a browser and Jenkins will walk through initialization and plugin installation.


Jenkins version: 2.176.3-1.1

  1. Deployment node configuration

  • On the Jenkins home page, go to "Manage Jenkins -> Configure System" in the left menu and scroll down to the "Publish over SSH" section

  • Add the SSH private key / password used to log in to the target host

    (screenshot: jenkins-1)

  • Add nodes: the login user name, IP address, working directory, etc.; multiple nodes can be added

    • Name: an identifier to tell hosts apart
    • Hostname: the host's IP address
    • Username: the user name used to log in
    • Remote Directory: the remote working directory

(screenshot: jenkins-host)

  2. Deployment job configuration

  • Create the job: in the left sidebar choose "New Item", give it a meaningful name, and select "Freestyle project"

    Under Source Code Management, configure the Git repo, the credential (the key used to fetch the code), and the branch.

    If the code can be fetched normally, no warning appears:

    (screenshot: jenkins-source)

    Otherwise a warning like the one below appears; check whether the repo URL is wrong and whether the key has permission to pull the code:

(screenshot: jenkins-source-1)

  • Find the "Build" section

    • Add the build commands, i.e. the packaging step (cd into the workspace and tar it up):
    cd $WORKSPACE
    TAR_NAME=wps_eb_`date +%Y-%m-%d`.tar
    tar -cf $TAR_NAME ./
    • Add another build step, "Send files or execute commands over SSH", which ships the project files to the target host over SSH and runs the deploy commands


    • Add the commands to run on the target host after the files arrive; they rely on a script shipped with the project (release/project_init.sh):
    file_name=wps_eb_`date +%Y-%m-%d`.tar
    tmp_dir=/tmp/wps_eb_tmp
    log_dir=/data/log/pm2/wps_eb-admin
    echo $file_name
    sudo rm -rf $tmp_dir
    sudo mkdir -p $tmp_dir
    sudo tar -xf /tmp/$file_name -C $tmp_dir ./release/*
    cd $tmp_dir/release
    sudo ./project_init.sh /tmp/$file_name

Some shell scripts

I've been working on project-deployment tooling lately and wrote quite a few shell scripts; here's a summary.

1. sftp can batch multiple commands; use awk to split the file path and extract the file name ($NF is the last field)

#!/bin/bash
function check() {
    if [ -z "$1" ]; then
        echo "please give the filename"
        return
    fi
    if [ ! -f "$1" ]; then
        echo "file not exists, exit..."
        return
    fi
    echo "ls" > command.txt
    echo "cd bj6-c-grb-screen-admin01/史国富" >> command.txt
    echo "put $1" >> command.txt
    filename=`echo $1 | awk -F/ '{print $NF}'`
    echo $filename
    echo "ls $filename" >> command.txt
    sftp -P 2222 shiguofu@120.92.118.51 < command.txt
    echo "Done"
}


check "$1"
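The awk basename trick on its own (the path is just an example):

```shell
# -F/ splits on '/', and $NF is the last field, i.e. the file name
echo /data/pkg/wps_eb_2020-05-01.tar | awk -F/ '{print $NF}'
# prints: wps_eb_2020-05-01.tar
```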

2. Project deployment; the test-environment flag is toggled with sed (or the file can be edited by hand)

Passing environment variables: sudo re-initializes the environment, so variables such as PATH are passed into the script explicitly

WALLE_NODE=10.13.88.152
WALLE_USER=root
WALLE_PASSWD=1qaz@WSX
zip_file_name=wps_eb_admin_`date +%Y%m%d%H%M%S`.tar
remote_dest_dir=/tmp
TEST_NODE=10.226.50.22

function copy_to_walle()
{
    echo "copy to $WALLE_NODE"
    cd ..
    echo "tar service files..."
    tar -cf /tmp/$zip_file_name ./*
    echo "Done. tar file -> /tmp/$zip_file_name"
    sshpass -p "$WALLE_PASSWD" scp /tmp/$zip_file_name $WALLE_USER@$WALLE_NODE:$remote_dest_dir
    echo "copy to remote Done."
    sshpass -p "$WALLE_PASSWD" ssh $WALLE_USER@$WALLE_NODE "
        rm -rf /tmp/wps_eb_tmp
        mkdir -p /tmp/wps_eb_tmp
        tar -xf /tmp/$zip_file_name -C /tmp/wps_eb_tmp
    "
}

function deploy()
{
    if [ -z $1 ]; then
        echo "please give the ip address as the first param"
        return
    fi
    USER=root
    echo "deploy to test: $1"
    echo "tar service files..."
    tar -cf /tmp/$zip_file_name ./*
    echo "Done. tar file -> /tmp/$zip_file_name"
    if [ ! -z $2 ]; then
        USER=$2
    fi
    if [ ! -z $3 ]; then
        sshpass -p "$3" scp /tmp/$zip_file_name $USER@$1:$remote_dest_dir
        sshpass -p "$3" ssh $USER@$1 << eeooff
sudo -i
tar -xf $remote_dest_dir/$zip_file_name -C /tmp/ ./release/*;
cd /tmp/release
PATH=$PATH:/usr/local/bin && ./project_init.sh $remote_dest_dir/$zip_file_name
eeooff
    else
        scp /tmp/$zip_file_name $USER@$1:$remote_dest_dir
        echo "Done copy to remote host..."
        ssh $USER@$1 << eeooff
sudo -i
tar -xf $remote_dest_dir/$zip_file_name -C /tmp/ ./release/*;
cd /tmp/release
. /etc/profile && ./project_init.sh $remote_dest_dir/$zip_file_name
eeooff
    fi
    rm /tmp/$zip_file_name
    # ./project_init.sh $remote_dest_dir/$zip_file_name
}


function deploy_test()
{
    cd ..
    sed -i 's/production/development/g' pm2.json
    deploy 10.226.50.22 root 123456
    sed -i 's/development/production/g' pm2.json
}

echo $1
if [ -z $1 ]; then
    deploy_test
elif [ "$1" == "walle" ]; then
    copy_to_walle
else
    cd ..
    deploy $1 $2 $3
fi
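The sed flip that deploy_test relies on, shown in isolation (the file path is made up):

```shell
# toggle the pm2 environment flag in place
printf 'env: production\n' > /tmp/pm2_demo.json
sed -i 's/production/development/g' /tmp/pm2_demo.json
cat /tmp/pm2_demo.json
# prints: env: development
```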

3. Initialize the server environment and install the required packages

#!/bin/bash
SOFT_DIR=/data/soft

function install_webp(){
    cd $SOFT_DIR
    wget https://storage.googleapis.com/downloads.webmproject.org/releases/webp/libwebp-1.0.3-linux-x86-64.tar.gz
    tar -xf libwebp-1.0.3-linux-x86-64.tar.gz
    cd libwebp-1.0.3-linux-x86-64/bin
    if [ -f /usr/bin/cwebp ]; then
        mv /usr/bin/cwebp /usr/bin/cweb.bak
    fi
    cp cwebp /usr/bin/
}

function install_node(){
    cd $SOFT_DIR
    wget https://nodejs.org/dist/v10.13.0/node-v10.13.0-linux-x64.tar.xz
    tar xvf node-v10.13.0-linux-x64.tar.xz
    echo "export PATH=$PATH:$SOFT_DIR/node-v10.13.0-linux-x64/bin" >> /etc/profile
    echo "export NODE_PATH=/data/soft/node-v10.13.0-linux-x64/lib/node_modules" >> /etc/profile
    source /etc/profile
    npm install pm2 -g
    echo "10.13.0.29 wpsgit.xxx.net" >> /etc/hosts
    echo "10.13.0.98 cdnshow.xxx.kingsoft.net" >> /etc/hosts
    echo "120.92.115.93 suc.xxx.kingsoft.net" >> /etc/hosts
}
# mongodb download url: https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.4/x86_64/RPMS/

if [ ! -d $SOFT_DIR ];then
    rm -rf $SOFT_DIR
    mkdir -p $SOFT_DIR
fi
which cwebp  # use which to check whether the binary exists, then read its exit code
code=$?
echo $code
if [ $code != 0 ]; then
    install_webp
fi
which npm
code=$?
if [ $code != 0 ]; then
    install_node
fi
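The which/exit-code pattern on its own:

```shell
# which exits 0 when the command is found, non-zero otherwise
which ls > /dev/null 2>&1
echo $?
# prints: 0
which no_such_cmd_xyz > /dev/null 2>&1 || echo "missing"
# prints: missing
```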

4. Service init script

The service must run under sudo, so the script sources /etc/profile at the top to initialize the environment

#!/bin/bash
source /etc/profile

function init()
{
    if [ ! -z $1 ] && [ -d $1 ]; then
        final_dir=$1;
    else
        echo "$1 did not exists... exit"
        return
    fi
    echo "install package..."
    cd $1
    if [ ! -z $2 ]; then
        sed -i "s/production/$2/g" pm2.json
    fi
    source /etc/profile
    npm config set @wps:registry http://120.92.93.69:4873
    npm install --no-save --production
    npm install -g cnpm --registry=https://registry.npm.taobao.org
    cnpm install canvas
    echo "install Done."
    echo "stop service sxxx-eb.."
    sh -c 'source /etc/profile && pm2 delete wps_eb-admin' &>/tmp/pm2.log
    echo "stopped"
    sleep 1
    echo "start service xxx-eb"
    sh -c 'source /etc/profile && pm2 start pm2.json' &>/tmp/pm2.log
    echo "start Done."
}


function deploy_by_tar()
{
    release_dir=/data/www
    dest_dir=/data/release/xxx_ebook_admin/`date +%Y%m%d-%H%M%S`
    final_xxx_ebook_dir=$release_dir/xxx_ebook_admin
    if [ ! -z $1 ] && [ -f $1 ]; then
        echo "deploy with $1..."
        zip_file_name=$1
    else
        echo "$1 did not exists... exit"
        return
    fi
    mkdir -p $release_dir
    mkdir -p $dest_dir
    echo "untar package to $dest_dir"
    tar -xf $zip_file_name -C $dest_dir
    echo "untar Done."
    echo "ln -sfn $dest_dir $final_xxx_ebook_dir"
    ln -sfn $dest_dir $final_xxx_ebook_dir
    echo 'Done.'
    init $final_xxx_ebook_dir $2
    rm $zip_file_name
}

./init_env.sh
deploy_by_tar $1 $2

Shell notes

1. Shell functions don't declare parameters; arguments are read inside via $1, $2, …, for example:

#!/bin/bash

function deploy_one_host()  # function definition; the parentheses are optional
{
    cd wps_pic
    zip_file_name=wps_pic_`date +%Y%m%d%H%M%S`.zip
    zip "$zip_file_name" ./* -q
    scp $zip_file_name root@$1:/tmp  # $1 is the first argument
    ssh root@$1 "sh exec.sh $zip_file_name;"  # ssh runs commands on the remote host; separate multiple commands with semicolons
}

deploy_one_host 10.229.26.143  # pass one argument; append more separated by spaces

2. Shell command-line arguments

#!/bin/bash

echo $1, $2  # the first and second command-line arguments; empty if not given

Run it:

./exec.sh a b

3. if tests

function check()
{
    if [ ! -d "$release_name" ]; then
        echo "$release_name"" not exists, check the online dir"
        return
    fi
    if [ -z "$1" ]; then
        echo "must give the dir"
        return
    elif [ ! -d "$1" ]; then
        echo "$1" "not exists please give the dir will online"
        return
    else
        echo "check ok"
        do_online $1
    fi
}

-d  true if the path exists and is a directory

-f  true if the path exists and is a regular file

-z  true if the string has length zero

== / !=  string equality / inequality
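A quick demonstration of these tests (the file is created first so the checks pass):

```shell
#!/bin/bash
touch /tmp/demo_file
[ -d /tmp ] && echo "/tmp is a directory"       # -d: directory test
[ -f /tmp/demo_file ] && echo "regular file"    # -f: regular-file test
s=""
[ -z "$s" ] && echo "empty string"              # -z: zero-length string test
[ "abc" == "abc" ] && echo "strings equal"      # ==: string equality (bash)
```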