Background
In a microservice architecture, Zookeeper plays a very important role: the deployment and operation of many services depend on it. As a result, unit tests and integration tests inevitably have to deal with Zookeeper. Just as H2 is commonly used as an in-memory stand-in for an Oracle/MySQL database, we would like to find a Zookeeper stand-in that can run in memory.
Curator-test
Curator is a Zookeeper client originally open-sourced by Netflix and since donated to the Apache Foundation. The project site is: http://curator.apache.org/.
Compared with the native client that ships with Zookeeper, Curator offers a higher level of abstraction and greatly reduces the amount of client code you have to write.
Curator provides a rich set of modules, which readers can explore on the project site above. What matters to us as testers is that Curator ships a Curator-Test module, and it contains exactly the in-memory Zookeeper stand-in we are looking for.
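To give a feel for that difference, here is a minimal, illustrative sketch of Curator's fluent API, assuming a recent curator-framework artifact is on the classpath; the class name, connection string, and znode path are placeholders, not part of the example later in this article:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorQuickStart {
    public static void main(String[] args) throws Exception {
        // Retries are handled by the framework instead of hand-written loops.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "127.0.0.1:2181",                        // placeholder connect string
                new ExponentialBackoffRetry(1000, 3));   // base sleep 1s, at most 3 retries
        client.start();
        try {
            // Fluent API: create the node (and missing parents) and read it back.
            client.create().creatingParentsIfNeeded().forPath("/demo/config", "v1".getBytes());
            byte[] data = client.getData().forPath("/demo/config");
            System.out.println(new String(data));
        } finally {
            client.close();
        }
    }
}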
curator-test
Contains the TestingServer, the TestingCluster and a few other tools useful for testing.
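For the simplest case, the TestingServer mentioned above runs a single-node Zookeeper entirely in-process. A minimal sketch, assuming curator-test is on the test classpath (the class name is just for illustration):

import org.apache.curator.test.TestingServer;

public class TestingServerDemo {
    public static void main(String[] args) throws Exception {
        // Starts an in-memory, single-node Zookeeper on a random free port.
        try (TestingServer server = new TestingServer()) {
            // Hand this connect string to any Zookeeper/Curator client under test.
            System.out.println("Connect string: " + server.getConnectString());
        } // close() stops the server and cleans up its temporary data directory
    }
}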
Usage example
In real test scenarios, what is usually needed is a small Zookeeper cluster, so we use the TestingCluster class provided by Curator-Test.
@Andy2019 has published such an example:
https://blog.csdn.net/Andy2019/article/details/73379978
Curator-Test as an in-memory service
During testing we want Zookeeper to behave like the database: started before the test cases run and shut down once all of them have finished. The example above therefore needs only minor changes:
package org.jacoco.examples.maven.java;

import static org.junit.Assert.assertNotNull;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.curator.test.InstanceSpec;
import org.apache.curator.test.TestingCluster;
import org.apache.curator.test.TestingZooKeeperServer;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

/**
 * Starts an in-memory three-node Zookeeper cluster around each test and
 * verifies that a new leader is elected after the current leader is killed.
 *
 * @author RongShu
 */
public class TestingCluster_Sample {

    TestingCluster cluster;

    @Before
    public void setup() throws Exception {
        // Describe a three-node ensemble on fixed local ports.
        List<InstanceSpec> specs = new ArrayList<>();
        int port = 30155, electionPort = 31155, quorumPort = 32155;
        for (int i = 0; i < 3; i++) {
            InstanceSpec spec = new InstanceSpec(null, port, electionPort, quorumPort,
                    true, i, 10000, 100, null, "127.0.0.1");
            specs.add(spec);
            port++;
            electionPort++;
            quorumPort++;
        }

        cluster = new TestingCluster(specs);
        cluster.start();

        // Safety net: stop the cluster on JVM exit even if teardown never runs.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                cluster.stop();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }));
    }
    @Test
    public void testCluster() throws Exception {
        // Give the ensemble a moment to finish the initial leader election.
        Thread.sleep(2000);

        TestingZooKeeperServer leader = null;
        for (TestingZooKeeperServer zs : cluster.getServers()) {
            System.out.print(zs.getInstanceSpec().getServerId() + "-");
            System.out.print(zs.getQuorumPeer().getServerState() + "-");
            System.out.println(zs.getInstanceSpec().getConnectString());
            if (zs.getQuorumPeer().getServerState().equals("leading")) {
                leader = zs;
            }
        }
        assertNotNull("no leader was elected", leader);

        // Kill the leader and give the remaining nodes time to elect a new one.
        leader.kill();
        Thread.sleep(2000);

        System.out.println("--After leader kill:");
        for (TestingZooKeeperServer zs : cluster.getServers()) {
            System.out.print(zs.getInstanceSpec().getServerId() + "-");
            System.out.print(zs.getQuorumPeer().getServerState() + "-");
            System.out.println(zs.getInstanceSpec().getConnectString());
        }
    }
    @After
    public void teardown() throws IOException {
        cluster.stop();
    }
}
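If the cluster should be started only once for the whole test class rather than around each test, the same idea can be expressed with JUnit 4's @BeforeClass/@AfterClass and a static field. The sketch below is illustrative only, assuming curator-test and curator-framework are both available; the class name, znode path, and three-node count are my own choices. It also shows how a Curator client under test would be pointed at the in-memory ensemble via getConnectString():

package org.jacoco.examples.maven.java;

import static org.junit.Assert.assertArrayEquals;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryOneTime;
import org.apache.curator.test.TestingCluster;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class TestingCluster_PerClass_Sample {

    private static TestingCluster cluster;

    @BeforeClass
    public static void startCluster() throws Exception {
        // One three-node in-memory ensemble shared by every test in the class.
        cluster = new TestingCluster(3);
        cluster.start();
    }

    @AfterClass
    public static void stopCluster() throws Exception {
        cluster.close();
    }

    @Test
    public void clientCanReadItsOwnWrite() throws Exception {
        // Point the client under test at the in-memory cluster.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                cluster.getConnectString(), new RetryOneTime(500));
        client.start();
        try {
            client.create().creatingParentsIfNeeded().forPath("/demo/key", "value".getBytes());
            assertArrayEquals("value".getBytes(), client.getData().forPath("/demo/key"));
        } finally {
            client.close();
        }
    }
}

Which lifecycle to prefer is a trade-off: the per-class variant is faster because the cluster is started only once, while the per-test variant in the main example gives each test a completely fresh ensemble.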