Connecting to Kafka from Java, with SASL/SSL Authentication

This article walks through connecting to Kafka from Java: first a plain producer and consumer, then the same clients with SASL/PLAIN and SASL/SCRAM authentication, configured both inline and through JAAS configuration files. Corrections and suggestions are welcome.

Connecting to Kafka from Java

1. Maven dependencies (note: the versions must match your Kafka cluster)

Only kafka-clients is strictly required for the producer and consumer examples below; kafka_2.12 (the broker artifact) and kafka-streams are included for completeness.

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.12</artifactId>
            <version>2.5.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.5.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-streams</artifactId>
            <version>2.5.0</version>
        </dependency>

2. Producing data

2.1 Creating a producer (no authentication)
package com.tyhh.test;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Properties;
import java.util.concurrent.Future;

/**
 * @author: 
 * @version: v1.0
 * @description:
 * @date: 
 **/
public class KafkaProducerTest {

    public static void main(String[] args) {

        Properties props = new Properties();
        String topic = "my-topic";
        // broker connection addresses
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("acks", "all");             // the send succeeds only once all in-sync replicas have the record
        props.put("retries", 0);              // number of retries on transient failures (0 = do not retry)
        props.put("batch.size", 16384);       // batch size in bytes per partition
        props.put("linger.ms", 1);            // wait up to 1 ms to fill a batch before sending
        props.put("buffer.memory", 33554432); // total memory for buffering unsent records
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
       
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        for (int i = 0; i < 10; i++) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>(topic, "topic_key_" + i, "topic_value_" + i);
            Future<RecordMetadata> metadataFuture = producer.send(record);
            try {
                RecordMetadata recordMetadata = metadataFuture.get(); // blocks until the broker responds
                System.out.println("send succeeded!");
                System.out.println("topic: " + recordMetadata.topic());
                System.out.println("partition: " + recordMetadata.partition());
                System.out.println("offset: " + recordMetadata.offset());
            } catch (Exception e) {
                System.out.println("send failed!");
                e.printStackTrace();
            }
        }
        producer.flush();
        producer.close();
    }

}
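Blocking on metadataFuture.get() after every send() serializes the producer and limits throughput. As a minimal sketch (the class name AsyncProducerSketch is an illustrative assumption; topic and cluster are the same as above), the loop can use the callback overload of send() instead:

package com.tyhh.test;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class AsyncProducerSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("my-topic", "topic_key_" + i, "topic_value_" + i);
                // The callback runs on the producer's I/O thread once the broker
                // acknowledges (or rejects) the record; send() itself never blocks.
                producer.send(record, (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.printf("acked: topic=%s, partition=%d, offset=%d%n",
                                metadata.topic(), metadata.partition(), metadata.offset());
                    }
                });
            }
            producer.flush(); // wait for all in-flight records before the implicit close
        }
    }

}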
2.2 Creating a producer with SASL/PLAIN authentication (note: SASL_PLAINTEXT authenticates the client but does not encrypt traffic)
package com.tyhh.test;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Properties;
import java.util.concurrent.Future;

/**
 * @author: 
 * @version: v1.0
 * @description:
 * @date: 
 **/
public class KafkaProducerTest {

    public static void main(String[] args) {

        Properties props = new Properties();

        String user = "admin";
        String password = "admin";
        String topic = "my-topic";

        // broker connection addresses
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("acks", "all");             // the send succeeds only once all in-sync replicas have the record
        props.put("retries", 0);              // number of retries on transient failures (0 = do not retry)
        props.put("batch.size", 16384);       // batch size in bytes per partition
        props.put("linger.ms", 1);            // wait up to 1 ms to fill a batch before sending
        props.put("buffer.memory", 33554432); // total memory for buffering unsent records
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // SASL/PLAIN authentication
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule "
                        + "required username=\"" + user + "\" password=\"" + password + "\";");

       
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        for (int i = 0; i < 10; i++) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>(topic, "topic_key_" + i, "topic_value_" + i);
            Future<RecordMetadata> metadataFuture = producer.send(record);
            try {
                RecordMetadata recordMetadata = metadataFuture.get(); // blocks until the broker responds
                System.out.println("send succeeded!");
                System.out.println("topic: " + recordMetadata.topic());
                System.out.println("partition: " + recordMetadata.partition());
                System.out.println("offset: " + recordMetadata.offset());
            } catch (Exception e) {
                System.out.println("send failed!");
                e.printStackTrace();
            }
        }
        producer.flush();
        producer.close();
    }

}
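Hard-coding admin/admin is fine for a local test, but credentials should not live in source code. A small helper along these lines keeps the SASL settings in one place; the KAFKA_USER and KAFKA_PASS environment variable names are hypothetical choices for this sketch:

package com.tyhh.test;

import java.util.Properties;

public final class SaslPlainConfig {

    /**
     * Adds SASL/PLAIN settings to an existing producer or consumer config,
     * reading credentials from the environment instead of source code.
     * KAFKA_USER and KAFKA_PASS are hypothetical variable names for this sketch.
     */
    public static void apply(Properties props) {
        String user = System.getenv().getOrDefault("KAFKA_USER", "admin");
        String password = System.getenv().getOrDefault("KAFKA_PASS", "admin");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule "
                        + "required username=\"" + user + "\" password=\"" + password + "\";");
    }

    private SaslPlainConfig() {
    }

}

The producer above would then call SaslPlainConfig.apply(props) instead of the three inline props.put(...) lines.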
2.3 Creating a producer with SASL/PLAIN via a JAAS configuration file

Contents of kafka_client_jaas_plain.conf:

KafkaClient {
	org.apache.kafka.common.security.plain.PlainLoginModule required
	username="admin"
	password="admin";
};

Implementation:

package com.tyhh.test;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Properties;
import java.util.concurrent.Future;

/**
 * @author: 
 * @version: v1.0
 * @description:
 * @date: 
 **/
public class KafkaProducerTest {

    public static void main(String[] args) {

        // point the JVM at the JAAS file for SASL authentication
        System.setProperty("java.security.auth.login.config", "./kafka_ssl_conf/kafka_client_jaas_plain.conf");

        Properties props = new Properties();

        String topic = "my-topic";

        // broker connection addresses
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("acks", "all");             // the send succeeds only once all in-sync replicas have the record
        props.put("retries", 0);              // number of retries on transient failures (0 = do not retry)
        props.put("batch.size", 16384);       // batch size in bytes per partition
        props.put("linger.ms", 1);            // wait up to 1 ms to fill a batch before sending
        props.put("buffer.memory", 33554432); // total memory for buffering unsent records
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // the JAAS file only supplies credentials; the protocol and mechanism must still be set
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
	
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        for (int i = 0; i < 10; i++) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>(topic, "topic_key_" + i, "topic_value_" + i);
            Future<RecordMetadata> metadataFuture = producer.send(record);
            try {
                RecordMetadata recordMetadata = metadataFuture.get(); // blocks until the broker responds
                System.out.println("send succeeded!");
                System.out.println("topic: " + recordMetadata.topic());
                System.out.println("partition: " + recordMetadata.partition());
                System.out.println("offset: " + recordMetadata.offset());
            } catch (Exception e) {
                System.out.println("send failed!");
                e.printStackTrace();
            }
        }
        producer.flush();
        producer.close();
    }

}
2.4 Creating a producer with SASL/SCRAM authentication
package com.tyhh.test;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Properties;
import java.util.concurrent.Future;

/**
 * @author: 
 * @version: v1.0
 * @description:
 * @date: 
 **/
public class KafkaProducerTest {

    public static void main(String[] args) {

        Properties props = new Properties();

        String user = "admin";
        String password = "admin";
        String topic = "my-topic";

        // broker connection addresses
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("acks", "all");             // the send succeeds only once all in-sync replicas have the record
        props.put("retries", 0);              // number of retries on transient failures (0 = do not retry)
        props.put("batch.size", 16384);       // batch size in bytes per partition
        props.put("linger.ms", 1);            // wait up to 1 ms to fill a batch before sending
        props.put("buffer.memory", 33554432); // total memory for buffering unsent records
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // SASL/SCRAM authentication
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule "
                        + "required username=\"" + user + "\" password=\"" + password + "\";");
       
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        for (int i = 0; i < 10; i++) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>(topic, "topic_key_" + i, "topic_value_" + i);
            Future<RecordMetadata> metadataFuture = producer.send(record);
            try {
                RecordMetadata recordMetadata = metadataFuture.get(); // blocks until the broker responds
                System.out.println("send succeeded!");
                System.out.println("topic: " + recordMetadata.topic());
                System.out.println("partition: " + recordMetadata.partition());
                System.out.println("offset: " + recordMetadata.offset());
            } catch (Exception e) {
                System.out.println("send failed!");
                e.printStackTrace();
            }
        }
        producer.flush();
        producer.close();
    }

}
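Unlike PLAIN, SCRAM credentials are stored in the cluster itself and must be registered before any client can authenticate, typically with the kafka-configs.sh tool on a broker. As a sketch, newer clusters can also do this from Java via the AdminClient; note that this API (KIP-554) requires kafka-clients and brokers at version 2.7 or later, not the 2.5.0 used in this article:

package com.tyhh.test;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;
import java.util.Collections;
import java.util.Properties;

public class CreateScramUserSketch {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");

        try (Admin admin = Admin.create(props)) {
            // Register (or overwrite) SCRAM-SHA-512 credentials for user "admin".
            ScramCredentialInfo info =
                    new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_512, 4096);
            admin.alterUserScramCredentials(Collections.singletonList(
                            new UserScramCredentialUpsertion("admin", info, "admin")))
                    .all().get(); // block until the broker confirms
        }
    }

}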
2.5 Creating a producer with SASL/SCRAM via a JAAS configuration file

Contents of kafka_client_jaas_scram.conf:

KafkaClient {
	org.apache.kafka.common.security.scram.ScramLoginModule required
	username="admin"
	password="admin";
};

Implementation:

package com.tyhh.test;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Properties;
import java.util.concurrent.Future;

/**
 * @author: 
 * @version: v1.0
 * @description:
 * @date: 
 **/
public class KafkaProducerTest {

    public static void main(String[] args) {

        // point the JVM at the JAAS file for SASL authentication
        System.setProperty("java.security.auth.login.config", "./kafka_ssl_conf/kafka_client_jaas_scram.conf");

        Properties props = new Properties();

        String topic = "my-topic";

        // broker connection addresses
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("acks", "all");             // the send succeeds only once all in-sync replicas have the record
        props.put("retries", 0);              // number of retries on transient failures (0 = do not retry)
        props.put("batch.size", 16384);       // batch size in bytes per partition
        props.put("linger.ms", 1);            // wait up to 1 ms to fill a batch before sending
        props.put("buffer.memory", 33554432); // total memory for buffering unsent records
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // the JAAS file only supplies credentials; the protocol and mechanism must still be set
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
	
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        for (int i = 0; i < 10; i++) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>(topic, "topic_key_" + i, "topic_value_" + i);
            Future<RecordMetadata> metadataFuture = producer.send(record);
            try {
                RecordMetadata recordMetadata = metadataFuture.get(); // blocks until the broker responds
                System.out.println("send succeeded!");
                System.out.println("topic: " + recordMetadata.topic());
                System.out.println("partition: " + recordMetadata.partition());
                System.out.println("offset: " + recordMetadata.offset());
            } catch (Exception e) {
                System.out.println("send failed!");
                e.printStackTrace();
            }
        }
        producer.flush();
        producer.close();
    }

}

3. Consuming data

3.1 Creating a consumer (no authentication)
package com.tyhh.test;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

/**
 * @author: 
 * @version: v1.0
 * @description:
 * @date: 
 **/
public class KafkaConsumerTest {

    public static void main(String[] args) {


        String topic = "my-topic";
        String groupId = "my-group";
        String autoCommit = "true";
        String offsetReset = "earliest";

        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("group.id", groupId);
        // whether offsets are committed automatically
        props.put("enable.auto.commit", autoCommit);
        props.put("auto.offset.reset", offsetReset);
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        // deserializers (both key and value are required)
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        consumer.subscribe(Collections.singletonList(topic));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n", record.partition(),
                        record.offset(), record.key(), record.value());
            }
            }
        }
    }

}
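Because enable.auto.commit is true above, offsets are committed in the background every auto.commit.interval.ms (1 second here), so a crash can redeliver up to a second of already-processed records. A minimal sketch of committing manually after each processed batch instead (same topic, group, and cluster assumed):

package com.tyhh.test;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ManualCommitConsumerSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("group.id", "my-group");
        props.put("enable.auto.commit", "false"); // we commit explicitly below
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition = %d, offset = %d, value = %s%n",
                            record.partition(), record.offset(), record.value());
                }
                // Commit only after the whole batch is processed: at-least-once delivery.
                consumer.commitSync();
            }
        }
    }

}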
3.2 Creating a consumer with SASL/PLAIN authentication
package com.tyhh.test;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

/**
 * @author: 
 * @version: v1.0
 * @description:
 * @date: 
 **/
public class KafkaConsumerTest {

    public static void main(String[] args) {

        String user = "admin";
        String password = "admin";
        String topic = "my-topic";
        String groupId = "my-group";
        String autoCommit = "true";
        String offsetReset = "earliest";

        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("group.id", groupId);
        // whether offsets are committed automatically
        props.put("enable.auto.commit", autoCommit);
        props.put("auto.offset.reset", offsetReset);
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        // deserializers (both key and value are required)
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // SASL/PLAIN authentication
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule "
                        + "required username=\"" + user + "\" password=\"" + password + "\";");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        consumer.subscribe(Collections.singletonList(topic));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n", record.partition(),
                        record.offset(), record.key(), record.value());
            }
            }
        }
    }

}
3.3 Creating a consumer with SASL/PLAIN via a JAAS configuration file

Contents of kafka_client_jaas_plain.conf (same file as in section 2.3):

KafkaClient {
	org.apache.kafka.common.security.plain.PlainLoginModule required
	username="admin"
	password="admin";
};

Implementation:

package com.tyhh.test;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

/**
 * @author: 
 * @version: v1.0
 * @description:
 * @date: 
 **/
public class KafkaConsumerTest {

    public static void main(String[] args) {

        // point the JVM at the JAAS file for SASL authentication
        System.setProperty("java.security.auth.login.config", "./kafka_ssl_conf/kafka_client_jaas_plain.conf");

        String topic = "my-topic";
        String groupId = "my-group";
        String autoCommit = "true";
        String offsetReset = "earliest";

        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("group.id", groupId);
        // whether offsets are committed automatically
        props.put("enable.auto.commit", autoCommit);
        props.put("auto.offset.reset", offsetReset);
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        // deserializers (both key and value are required)
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // the JAAS file only supplies credentials; the protocol and mechanism must still be set
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
       
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        consumer.subscribe(Collections.singletonList(topic));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n", record.partition(),
                        record.offset(), record.key(), record.value());
            }
            }
        }
    }

}
3.4 Creating a consumer with SASL/SCRAM authentication
package com.tyhh.test;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

/**
 * @author: 
 * @version: v1.0
 * @description:
 * @date: 
 **/
public class KafkaConsumerTest {

    public static void main(String[] args) {

        String user = "admin";
        String password = "admin";
        String topic = "my-topic";
        String groupId = "my-group";
        String autoCommit = "true";
        String offsetReset = "earliest";

        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("group.id", groupId);
        // whether offsets are committed automatically
        props.put("enable.auto.commit", autoCommit);
        props.put("auto.offset.reset", offsetReset);
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        // deserializers (both key and value are required)
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // SASL/SCRAM authentication (JAAS values must use double quotes, not single quotes)
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule "
                        + "required username=\"" + user + "\" password=\"" + password + "\";");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        consumer.subscribe(Collections.singletonList(topic));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n", record.partition(),
                        record.offset(), record.key(), record.value());
            }
            }
        }
    }

}
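Everything so far uses SASL_PLAINTEXT, which authenticates the client but sends credentials and data in the clear; despite the "SSL" in the title, no TLS is involved. If the brokers also expose a TLS listener, switching to SASL_SSL encrypts the connection. A sketch of the extra client settings as a reusable helper (the truststore location and passwords are placeholders for illustration):

package com.tyhh.test;

import java.util.Properties;

public final class SaslSslScramConfig {

    /**
     * Switches a client config from SASL_PLAINTEXT to SASL_SSL, adding TLS
     * encryption on top of SCRAM authentication. Requires a broker TLS listener.
     * The truststore location and passwords below are placeholders for this sketch.
     */
    public static void apply(Properties props, String user, String password) {
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        // Truststore holding the CA that signed the broker certificates.
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "truststore-password");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule "
                        + "required username=\"" + user + "\" password=\"" + password + "\";");
    }

    private SaslSslScramConfig() {
    }

}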
3.5 Creating a consumer with SASL/SCRAM via a JAAS configuration file

Contents of kafka_client_jaas_scram.conf (same file as in section 2.5):

KafkaClient {
	org.apache.kafka.common.security.scram.ScramLoginModule required
	username="admin"
	password="admin";
};

Implementation:

package com.tyhh.test;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

/**
 * @author: 
 * @version: v1.0
 * @description:
 * @date: 
 **/
public class KafkaConsumerTest {

    public static void main(String[] args) {

        // point the JVM at the JAAS file for SASL authentication
        System.setProperty("java.security.auth.login.config", "./kafka_ssl_conf/kafka_client_jaas_scram.conf");

        String topic = "my-topic";
        String groupId = "my-group";
        String autoCommit = "true";
        String offsetReset = "earliest";

        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("group.id", groupId);
        // whether offsets are committed automatically
        props.put("enable.auto.commit", autoCommit);
        props.put("auto.offset.reset", offsetReset);
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        // deserializers (both key and value are required)
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // the JAAS file only supplies credentials; the protocol and mechanism must still be set
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
       
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        consumer.subscribe(Collections.singletonList(topic));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n", record.partition(),
                        record.offset(), record.key(), record.value());
            }
            }
        }
    }

}
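One last loose end: every consumer above spins in an endless poll loop and never closes, so on shutdown it leaves the group only after session.timeout.ms expires. A minimal sketch of a clean shutdown using wakeup(), the one consumer method that is safe to call from another thread (configuration abbreviated; same cluster assumed):

package com.tyhh.test;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class GracefulShutdownConsumerSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        final Thread mainThread = Thread.currentThread();
        // On Ctrl+C, interrupt the blocked poll() and wait for the main thread
        // to finish closing the consumer before the JVM exits.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try {
                mainThread.join();
            } catch (InterruptedException ignored) {
            }
        }));

        try {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                consumer.poll(Duration.ofSeconds(1)); // process records here
            }
        } catch (WakeupException e) {
            // expected: triggered by the shutdown hook
        } finally {
            consumer.close(); // commits offsets (if enabled) and leaves the group promptly
        }
    }

}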

That concludes this walkthrough of connecting to Kafka from Java with SASL authentication.
