Introduction to sharding-jdbc
This article is based on the 4.x release.
- Official website
Positioned as a lightweight Java framework, it provides extra services at Java's JDBC layer. The client connects directly to the database and the framework ships as a jar, with no extra deployment or dependencies; think of it as an enhanced JDBC driver, fully compatible with JDBC and all ORM frameworks.
Works with any JDBC-based ORM, such as JPA, Hibernate, MyBatis, Spring JDBC Template, or raw JDBC.
Supports any third-party connection pool, such as DBCP, C3P0, BoneCP, Druid, HikariCP, etc.
Supports any database that implements the JDBC specification. Currently that covers MySQL, Oracle, SQL Server, PostgreSQL, and any database following the SQL92 standard.
sharding-jdbc does have some unsupported features, such as certain SQL functions and stored procedures; see the official list of JDBC unsupported items for details.
Sharding approach
- This is a project for a telecom group with a large volume of order data, mainly document stamping.
- The project recently went live in several cities; after a month of observation there were roughly 700,000 rows. More cities will come online, and the estimate going forward is about 5 million orders per month.
- After consideration, tables are split by year and quarter. Each table then holds roughly 15 million rows, within what the telecom teldb database can handle.
The tables that need sharding are kg_order_info (orders) and kg_upload_file (files).
kg_order_info
- Sharded dynamically by time based on the businessDate field.
- The year and quarter are derived from businessDate, and the physical table is named kg_order_info_{year}_{quarter}.
- For example, when businessDate is 2023-06-12 14:00:00, the sharding strategy routes the query to kg_order_info_2023_2.
- Historical data: if businessDate is earlier than 2023, inserts and queries all go to kg_order_info_2023_1.
- Every query condition must carry the businessDate parameter, for both range queries and exact queries.
- The sharding window currently covers 2023 through 2050.
- Primary keys are generated with the snowflake algorithm so the date can later be derived from the id; since the system went live on May 11, early data all goes into kg_order_info_2023_2.
- A stored procedure in the database runs on a three-month schedule and creates, one year ahead, the table structure for the corresponding quarter.
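The year/quarter routing described above can be sketched as follows. This is a minimal illustration of the mapping only, not the project's actual sharding algorithm class (which appears later); the class and method names here are hypothetical.

```java
import java.time.LocalDateTime;

public class OrderTableNameDemo {
    // Orders earlier than 2023 are all routed to the first table, per the rules above
    private static final LocalDateTime CUTOFF = LocalDateTime.of(2023, 1, 1, 0, 0);

    /** Map a businessDate to its physical table name, e.g. kg_order_info_2023_2. */
    public static String tableFor(LocalDateTime businessDate) {
        if (businessDate.isBefore(CUTOFF)) {
            return "kg_order_info_2023_1";
        }
        int quarter = (businessDate.getMonthValue() - 1) / 3 + 1;
        return "kg_order_info_" + businessDate.getYear() + "_" + quarter;
    }
}
```

With this mapping, 2023-06-12 14:00:00 lands in kg_order_info_2023_2, matching the example above.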
kg_upload_file
- Sharded dynamically by the snowflake primary key id.
- The generation time is derived back from the id, and the year and quarter of that time select the physical table kg_upload_file_{year}_{quarter}.
- For early ids where the time cannot be recovered, queries all go to kg_upload_file_2023_1.
- This table is only ever queried by exact id.
- The sharding window currently covers 2023 through 2050.
- A stored procedure in the database runs on a three-month schedule and creates, one year ahead, the table structure for the corresponding quarter.
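The id-to-date reversal in the second bullet works because a snowflake id keeps the millisecond timestamp in its high bits: shifting right past the 22 low bits (12 sequence + 5 worker + 5 data-center) and adding the epoch back recovers the generation time. A minimal sketch, assuming the same epoch and bit layout as the IdWorker class shown later:

```java
public class SnowflakeReverseDemo {
    // Epoch used by the project's IdWorker (value echoed from the class shown later)
    static final long EPOCH = 1489111610226L;

    /** Compose an id from a millisecond timestamp (worker/datacenter/sequence all zero). */
    static long compose(long timestampMillis) {
        return (timestampMillis - EPOCH) << 22;
    }

    /** Recover the generation timestamp from an id by undoing the shift. */
    static long reverse(long id) {
        return (id >> 22) + EPOCH;
    }
}
```

Because the low 22 bits are discarded by the right shift, the recovered timestamp is exact regardless of the sequence or machine bits.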
Queries should carry the sharding key whenever possible, to avoid scanning every physical table.
Adding the dependency
<sharding-jdbc.version>4.1.1</sharding-jdbc.version>
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>sharding-jdbc-core</artifactId>
<version>${sharding-jdbc.version}</version>
</dependency>
Code integration
common module
Add a sharding entry to the data source type enum
public enum DataSourceType {
    /** Master database */
    MASTER,
    /** Slave database */
    SLAVE,
    /** Sharding data source */
    SHARDING
}
Add a snowflake-algorithm id utility class
import java.util.Date;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
/** @Classname IdWorker @Date 2023-5-31 10:51 @Author 无朽 */
public class IdWorker {
    // Custom epoch the timestamps are measured from (1489111610226L, i.e. 2017-03-10)
    private static final long twepoch = 1489111610226L;
    // Number of bits for the worker id
    private final long workerIdBits = 5L;
    // Number of bits for the data-center id
    private final long dataCenterIdBits = 5L;
    // Maximum worker id: 31 (-1L ^ (-1L << n) is the largest value n bits can hold)
    private final long maxWorkerId = -1L ^ (-1L << workerIdBits);
    // Maximum data-center id: 31
    private final long maxdataCenterId = -1L ^ (-1L << dataCenterIdBits);
    // Number of bits for the per-millisecond sequence
    private final long sequenceBits = 12L;
    // Worker id is shifted left past the 12 sequence bits
    private final long workerIdShift = sequenceBits;
    // Data-center id is shifted left 17 bits (12 + 5)
    private final long dataCenterIdShift = sequenceBits + workerIdBits;
    // Timestamp is shifted left 22 bits (12 + 5 + 5)
    private final long timestampLeftShift = sequenceBits + workerIdBits + dataCenterIdBits;
    // Mask for the sequence: 4095 (0b111111111111 = 0xfff)
    private final long sequenceMask = -1L ^ (-1L << sequenceBits);
    // Worker id (0~31)
    private long workerId;
    // Data-center id (0~31)
    private long dataCenterId;
    // Per-millisecond sequence (0~4095)
    private long sequence = 0L;
    // Timestamp of the last generated id
    private long lastTimestamp = -1L;
    /**
     * @param workerId worker id (0~31)
     * @param dataCenterId data-center id (0~31)
     */
public IdWorker(long workerId, long dataCenterId) {
if (workerId > maxWorkerId || workerId < 0) {
throw new IllegalArgumentException(
String.format(
"worker Id can't be greater than %d or less than 0", maxWorkerId));
}
if (dataCenterId > maxdataCenterId || dataCenterId < 0) {
throw new IllegalArgumentException(
String.format(
"datacenter Id can't be greater than %d or less than 0",
maxdataCenterId));
}
this.workerId = workerId;
this.dataCenterId = dataCenterId;
}
    /**
     * Get the next id (this method is thread-safe).
     *
     * @return the next unique id
     */
public synchronized long nextId() {
        long timestamp = timeGen();
        // If the current time is before the last generation time, the system clock
        // has moved backwards; refuse to generate ids until it catches up
        if (timestamp < lastTimestamp) {
            throw new RuntimeException(
                    String.format(
                            "Clock moved backwards. Refusing to generate id for %d milliseconds",
                            lastTimestamp - timestamp));
        }
        // Same millisecond as the last id: advance the intra-millisecond sequence
        if (lastTimestamp == timestamp) {
            sequence = (sequence + 1) & sequenceMask;
            // Sequence overflow within this millisecond
            if (sequence == 0) {
                // Spin until the next millisecond and take its timestamp
                timestamp = tilNextMillis(lastTimestamp);
            }
        } else { // New millisecond: reset the sequence
            sequence = 0L;
        }
        // Remember when the last id was generated
        lastTimestamp = timestamp;
        // Shift and OR the fields together into a 64-bit id
return ((timestamp - twepoch) << timestampLeftShift)
| (dataCenterId << dataCenterIdShift)
| (workerId << workerIdShift)
| sequence;
}
    /**
     * Block until the next millisecond and return the new timestamp.
     *
     * @param lastTimestamp timestamp of the last generated id
     * @return the next millisecond timestamp
     */
protected long tilNextMillis(long lastTimestamp) {
long timestamp = timeGen();
while (timestamp <= lastTimestamp) {
timestamp = timeGen();
}
return timestamp;
}
    /**
     * Current time in milliseconds.
     *
     * @return current time in milliseconds
     */
protected long timeGen() {
return System.currentTimeMillis();
}
    /**
     * Recover the generation time from a snowflake id.
     *
     * @return the Date the id was generated at
     */
public static Date inverseDate(Long id) {
return new Date((id >> 22) + twepoch);
}
    // Singleton instance
    private static volatile IdWorker idWorker = null;
    // Get the singleton instance (double-checked locking)
public static IdWorker getIdWorker(long workerId, long dataCenterId) {
if (null == idWorker) {
synchronized (IdWorker.class) {
if (null == idWorker) {
idWorker = new IdWorker(workerId, dataCenterId);
}
}
}
return idWorker;
}
    public static void main(String[] args) throws InterruptedException {
        IdWorker idWorker = getIdWorker(0, 0);
        Map<Long, Integer> map = new ConcurrentHashMap<>();
        // Generate ids from several threads and count occurrences to check uniqueness
        Thread[] threads = new Thread[20];
        for (int i = 0; i < threads.length; i++) {
            threads[i] =
                    new Thread(
                            () -> {
                                for (int j = 0; j < 1000; j++) {
                                    map.merge(idWorker.nextId(), 1, Integer::sum);
                                }
                            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // Any count greater than 1 would be a duplicate id
        for (Map.Entry<Long, Integer> entry : map.entrySet()) {
            if (entry.getValue() > 1) {
                System.out.println("duplicate id: " + entry.getKey());
            }
        }
        // Reverse snowflake ids back to their generation time
        System.out.println(inverseDate(540068672161447936L));
        long id = idWorker.nextId();
        System.out.println(id + " -> " + inverseDate(id)); // timestamp = (id >> 22) + twepoch
    }
}
Add some methods to the DateUtils utility class for use by the sharding logic later
import org.apache.commons.lang3.time.DateFormatUtils;
import java.lang.management.ManagementFactory;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.time.ZoneId;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.Date;
import java.util.List;
import java.util.concurrent.TimeUnit;
/**
 * Date utility class
 *
 * @author Yzx
 */
public class DateUtils extends org.apache.commons.lang3.time.DateUtils {
public static String YYYY = "yyyy";
public static String YYYY_MM = "yyyy-MM";
public static String YYYY_MM_DD = "yyyy-MM-dd";
public static String YYYYMMDDHHMMSS = "yyyyMMddHHmmss";
public static String YYYY_MM_DD_HH_MM_SS = "yyyy-MM-dd HH:mm:ss";
private static String[] parsePatterns = {
"yyyy-MM-dd", "yyyy-MM-dd HH:mm:ss", "yyyy-MM-dd HH:mm", "yyyy-MM",
"yyyy/MM/dd", "yyyy/MM/dd HH:mm:ss", "yyyy/MM/dd HH:mm", "yyyy/MM",
"yyyy.MM.dd", "yyyy.MM.dd HH:mm:ss", "yyyy.MM.dd HH:mm", "yyyy.MM"
};
    /**
     * Get the current date as a Date.
     *
     * @return the current Date
     */
    public static Date getNowDate() {
        return new Date();
    }
    /**
     * Get the current date as a string, default format yyyy-MM-dd.
     *
     * @return String
     */
public static String getDate() {
return dateTimeNow(YYYY_MM_DD);
}
public static final String getTime() {
return dateTimeNow(YYYY_MM_DD_HH_MM_SS);
}
public static final String dateTimeNow() {
return dateTimeNow(YYYYMMDDHHMMSS);
}
public static final String dateTimeNow(final String format) {
return parseDateToStr(format, new Date());
}
public static final String dateTime(final Date date) {
return parseDateToStr(YYYY_MM_DD, date);
}
public static final String parseDateToStr(final String format, final Date date) {
return new SimpleDateFormat(format).format(date);
}
public static final Date dateTime(final String format, final String ts) {
try {
return new SimpleDateFormat(format).parse(ts);
} catch (ParseException e) {
throw new RuntimeException(e);
}
}
    /** Date path, i.e. year/month/day, e.g. 2018/08/08 */
public static final String datePath() {
Date now = new Date();
return DateFormatUtils.format(now, "yyyy/MM/dd");
}
    /** Date string as yyyyMMdd, e.g. 20180808 */
public static final String dateTime() {
Date now = new Date();
return DateFormatUtils.format(now, "yyyyMMdd");
}
    /**
     * Current time as yyyyMMddHHmmss.
     *
     * @return formatted timestamp
     */
    public static final String getSystemDate() {
        Calendar cal = Calendar.getInstance();
        SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMddHHmmss");
        String mDateTime = formatter.format(cal.getTime());
        return mDateTime;
    }
    /**
     * Current year and month (yyyyMM).
     *
     * @return formatted year-month
     */
    public static final String getSystemDateYM() {
        Calendar cal = Calendar.getInstance();
        SimpleDateFormat formatter = new SimpleDateFormat("yyyyMM");
        String mDateTime = formatter.format(cal.getTime());
        return mDateTime;
    }
    /**
     * Current day of month (dd).
     *
     * @return formatted day
     */
    public static String getSystemDateDay() {
        Calendar cal = Calendar.getInstance();
        SimpleDateFormat formatter = new SimpleDateFormat("dd");
        String mDateTime = formatter.format(cal.getTime());
        return mDateTime;
    }
    /**
     * Current hour (HH).
     *
     * @return formatted hour
     */
    public static final String getSystemDateHour() {
        Calendar cal = Calendar.getInstance();
        SimpleDateFormat formatter = new SimpleDateFormat("HH");
        String mDateTime = formatter.format(cal.getTime());
        return mDateTime;
    }
    /**
     * Ten-minute bucket containing the current minute, e.g. "50-59" when the minute is 53.
     *
     * @return the minute range string
     */
    public static final String getSystemDateMin() {
        Calendar cal = Calendar.getInstance();
        SimpleDateFormat formatter = new SimpleDateFormat("mm");
        String mDateTime = formatter.format(cal.getTime());
        // Keep only the tens digit and build an "X0-X9" range
        mDateTime = mDateTime.substring(0, 1);
        int end = Integer.parseInt(mDateTime);
        mDateTime = mDateTime + "0-" + end + "9";
        return mDateTime;
    }
    /** Parse a date string in any of the supported patterns into a Date */
public static Date parseDate(Object str) {
if (str == null) {
return null;
}
try {
return parseDate(str.toString(), parsePatterns);
} catch (ParseException e) {
return null;
}
}
    /** Get the server (JVM) start time */
public static Date getServerStartDate() {
long time = ManagementFactory.getRuntimeMXBean().getStartTime();
return new Date(time);
}
    /** Human-readable difference between two times */
    public static String getDatePoor(Date endDate, Date nowDate) {
        long nd = 1000 * 24 * 60 * 60;
        long nh = 1000 * 60 * 60;
        long nm = 1000 * 60;
        // Millisecond difference between the two times
        long diff = endDate.getTime() - nowDate.getTime();
        // Whole days
        long day = diff / nd;
        // Remaining hours
        long hour = diff % nd / nh;
        // Remaining minutes
        long min = diff % nd % nh / nm;
        return day + "天" + hour + "小时" + min + "分钟";
    }
    /**
     * Difference between two times in whole days.
     *
     * @param endDate end time
     * @param nowDate start time
     * @return the day difference
     */
    public static long getDatePoorDay(Date endDate, Date nowDate) {
        long nd = 1000 * 24 * 60 * 60;
        // Millisecond difference between the two times
        long diff = endDate.getTime() - nowDate.getTime();
        // Whole days
        return diff / nd;
    }
    /**
     * Convert a timeout in the given unit to seconds.
     *
     * @param timeout the timeout value
     * @param timeUnit the unit of the timeout
     * @return the timeout in seconds
     */
public static Long convertToSeconds(long timeout, TimeUnit timeUnit) {
switch (timeUnit) {
case MILLISECONDS:
return TimeUnit.SECONDS.convert(timeout, TimeUnit.MILLISECONDS);
case NANOSECONDS:
return TimeUnit.SECONDS.convert(timeout, TimeUnit.NANOSECONDS);
case SECONDS:
return timeout;
case MINUTES:
return TimeUnit.SECONDS.convert(timeout, TimeUnit.MINUTES);
case MICROSECONDS:
return TimeUnit.SECONDS.convert(timeout, TimeUnit.MICROSECONDS);
case HOURS:
return TimeUnit.SECONDS.convert(timeout, TimeUnit.HOURS);
case DAYS:
return TimeUnit.SECONDS.convert(timeout, TimeUnit.DAYS);
default:
return 0L;
}
}
    /**
     * Year a date falls in.
     *
     * @param date the date
     * @return the year
     */
public static int getYear(Date date) {
LocalDate localDate = date.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
return localDate.getYear();
}
    /**
     * Quarter (1-4) a date falls in.
     *
     * @param date the date
     * @return the quarter
     */
public static int getQuarter(Date date) {
LocalDate localDate = date.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
int month = localDate.getMonthValue();
if (month <= 3) {
return 1;
} else if (month <= 6) {
return 2;
} else if (month <= 9) {
return 3;
} else {
return 4;
}
}
    /**
     * List the year_quarter strings between two dates; e.g. 2022-05-01 10:00:00 to
     * 2023-05-01 10:00:00 returns ["2022_2","2022_3","2022_4","2023_1","2023_2"].
     *
     * @param startDate start of the range
     * @param endDate end of the range
     * @return list of "year_quarter" strings
     */
    public static List<String> getYearQuarterList(LocalDate startDate, LocalDate endDate) {
        List<String> yearQuarterList = new ArrayList<>();
        // Align to the first day of the start quarter so that stepping by three
        // months never skips a quarter boundary near the end of the range
        LocalDate currentDate =
                startDate.withDayOfMonth(1)
                        .withMonth(((startDate.getMonthValue() - 1) / 3) * 3 + 1);
        while (!currentDate.isAfter(endDate)) {
            int year = currentDate.getYear();
            int quarter = (currentDate.getMonthValue() - 1) / 3 + 1;
            String yearQuarter = year + "_" + quarter;
            yearQuarterList.add(yearQuarter);
            currentDate = currentDate.plusMonths(3);
        }
        return yearQuarterList;
    }
    /**
     * @param date the date
     * @return the date with the time set to the start of the day (00:00:00)
     */
    public static Date getStartDate(Date date) {
        Calendar calendar = Calendar.getInstance();
        calendar.setTime(date);
        // Zero out the hour, minute, second and millisecond fields
        calendar.set(Calendar.HOUR_OF_DAY, 0);
        calendar.set(Calendar.MINUTE, 0);
        calendar.set(Calendar.SECOND, 0);
        calendar.set(Calendar.MILLISECOND, 0);
        return calendar.getTime();
    }
    /**
     * @param date the date
     * @return the date with the time set to the end of the day (23:59:59)
     */
    public static Date getLastDate(Date date) {
        Calendar calendar = Calendar.getInstance();
        calendar.setTime(date);
        // Set the time fields to the last instant of the day
        calendar.set(Calendar.HOUR_OF_DAY, 23);
        calendar.set(Calendar.MINUTE, 59);
        calendar.set(Calendar.SECOND, 59);
        calendar.set(Calendar.MILLISECOND, 999);
        return calendar.getTime();
    }
public static void main(String[] args) {
LocalDate startDate = LocalDate.of(2022, 4, 25);
LocalDate endDate = LocalDate.of(2023, 5, 25);
List<String> yearQuarterList = getYearQuarterList(startDate, endDate);
System.out.println(yearQuarterList);
}
}
framework module
Adjust the multi-datasource configuration
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceBuilder;
import com.alibaba.druid.spring.boot.autoconfigure.properties.DruidStatProperties;
import com.alibaba.druid.util.Utils;
import com.kinggrid.common.enums.DataSourceType;
import com.kinggrid.common.utils.spring.SpringUtils;
import com.kinggrid.framework.config.properties.DruidProperties;
import com.kinggrid.framework.datasource.DynamicDataSource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import javax.annotation.PostConstruct;
import javax.servlet.*;
import javax.sql.DataSource;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
/**
 * Druid multi-datasource configuration
 *
 * @author Yzx
 */
@Slf4j
@Configuration
public class DruidConfig {
@Bean
@ConfigurationProperties("spring.datasource.druid.master")
public DataSource masterDataSource(DruidProperties druidProperties) {
DruidDataSource dataSource = DruidDataSourceBuilder.create().build();
return druidProperties.dataSource(dataSource);
}
@Bean
@ConfigurationProperties("spring.datasource.druid.slave")
@ConditionalOnProperty(
prefix = "spring.datasource.druid.slave",
name = "enabled",
havingValue = "true")
public DataSource slaveDataSource(DruidProperties druidProperties) {
DruidDataSource dataSource = DruidDataSourceBuilder.create().build();
return druidProperties.dataSource(dataSource);
}
@Bean(name = "dynamicDataSource")
@Primary
public DynamicDataSource dataSource(DataSource masterDataSource) {
Map<Object, Object> targetDataSources = new HashMap<>();
targetDataSources.put(DataSourceType.MASTER.name(), masterDataSource);
setDataSource(targetDataSources, DataSourceType.SLAVE.name(), "slaveDataSource");
setDataSource(targetDataSources, DataSourceType.SHARDING.name(), "shardingDataSource");
return new DynamicDataSource(masterDataSource, targetDataSources);
}
    /**
     * Register an optional data source if its bean exists.
     *
     * @param targetDataSources candidate data source map
     * @param sourceName data source name
     * @param beanName bean name
     */
public void setDataSource(
Map<Object, Object> targetDataSources, String sourceName, String beanName) {
try {
DataSource dataSource = SpringUtils.getBean(beanName);
targetDataSources.put(sourceName, dataSource);
} catch (Exception e) {
log.error(e.getMessage());
}
}
    /** Fix the druid log error: discard long time none received connection:xxx */
@PostConstruct
public void setProperties() {
System.setProperty("druid.mysql.usePingMethod", "false");
}
    /** Remove the advertisement at the bottom of the monitoring page */
@SuppressWarnings({"rawtypes", "unchecked"})
@Bean
@ConditionalOnProperty(
name = "spring.datasource.druid.statViewServlet.enabled",
havingValue = "true")
public FilterRegistrationBean removeDruidFilterRegistrationBean(
DruidStatProperties properties) {
        // Read the web monitoring page configuration
        DruidStatProperties.StatViewServlet config = properties.getStatViewServlet();
        // Derive the URL pattern for common.js
        String pattern = config.getUrlPattern() != null ? config.getUrlPattern() : "/druid/*";
        String commonJsPattern = pattern.replaceAll("\\*", "js/common.js");
        final String filePath = "support/http/resources/js/common.js";
        // Create a filter that rewrites the served file
Filter filter =
new Filter() {
@Override
public void init(javax.servlet.FilterConfig filterConfig)
throws ServletException {}
@Override
public void doFilter(
ServletRequest request, ServletResponse response, FilterChain chain)
throws IOException, ServletException {
chain.doFilter(request, response);
                        // Reset the buffer; response headers are kept
                        response.resetBuffer();
                        // Load common.js from the classpath
                        String text = Utils.readFromResource(filePath);
                        // Strip the banner and footer advertisement via regex
                        text = text.replaceAll("<a.*?banner\"></a><br/>", "");
                        text = text.replaceAll("powered.*?shrek.wang</a>", "");
response.getWriter().write(text);
}
@Override
public void destroy() {}
};
FilterRegistrationBean registrationBean = new FilterRegistrationBean();
registrationBean.setFilter(filter);
registrationBean.addUrlPatterns(commonJsPattern);
return registrationBean;
}
}
Add the sharding-jdbc configuration class
import org.apache.shardingsphere.api.config.sharding.KeyGeneratorConfiguration;
import org.apache.shardingsphere.api.config.sharding.ShardingRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.TableRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.strategy.StandardShardingStrategyConfiguration;
import org.apache.shardingsphere.shardingjdbc.api.ShardingDataSourceFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
/** @Classname ShardingDataSourceConfig @Description @Date 2023-5-23 15:38 @Author 无朽 */
@Configuration
public class ShardingDataSourceConfig {
@Bean(name = "shardingDataSource")
public DataSource shardingDataSource(
@Qualifier("masterDataSource") DataSource masterDataSource,
@Qualifier("orderTableRuleConfiguration")
TableRuleConfiguration orderTableRuleConfiguration,
@Qualifier("fileTableRuleConfiguration")
TableRuleConfiguration fileTableRuleConfiguration)
throws SQLException {
Map<String, DataSource> dataSourceMap = new HashMap<>(1);
dataSourceMap.put("masterDataSource", masterDataSource);
        // Configure the sharding rules
        ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
        shardingRuleConfig.getTableRuleConfigs().add(orderTableRuleConfiguration);
        shardingRuleConfig.getTableRuleConfigs().add(fileTableRuleConfiguration);
        // Build the sharding data source
DataSource dataSource =
ShardingDataSourceFactory.createDataSource(
dataSourceMap, shardingRuleConfig, getProperties());
return dataSource;
}
@Bean
public TableRuleConfiguration orderTableRuleConfiguration(
OrderTablePreciseShardingAlgorithm orderTablePreciseShardingAlgorithm,
OrderTableRangeShardingAlgorithm orderTableRangeShardingAlgorithm) {
TableRuleConfiguration orderTableRuleConfig =
new TableRuleConfiguration(
"kg_order_info", "masterDataSource.kg_order_info_${2023..2050}_$->{1..4}");
        // Table sharding strategy keyed on the business_date column
        orderTableRuleConfig.setTableShardingStrategyConfig(
                new StandardShardingStrategyConfiguration(
                        "business_date",
                        orderTablePreciseShardingAlgorithm,
                        orderTableRangeShardingAlgorithm));
        // Distributed primary key generator
        orderTableRuleConfig.setKeyGeneratorConfig(
                new KeyGeneratorConfiguration("KgShardingKey", "id"));
return orderTableRuleConfig;
}
@Bean
public TableRuleConfiguration fileTableRuleConfiguration(
FileTablePreciseShardingAlgorithm fileTablePreciseShardingAlgorithm) {
TableRuleConfiguration orderTableRuleConfig =
new TableRuleConfiguration(
"kg_upload_file",
"masterDataSource.kg_upload_file_${2023..2050}_$->{1..4}");
        // Table sharding strategy keyed on the id column
        orderTableRuleConfig.setTableShardingStrategyConfig(
                new StandardShardingStrategyConfiguration("id", fileTablePreciseShardingAlgorithm));
        // Distributed primary key generator
        orderTableRuleConfig.setKeyGeneratorConfig(
                new KeyGeneratorConfiguration("KgShardingKey", "id"));
return orderTableRuleConfig;
}
    /** sharding-jdbc system properties */
private Properties getProperties() {
Properties shardingProperties = new Properties();
shardingProperties.put("sql.show", true);
return shardingProperties;
}
}
Order table sharding rules
The order table is queried both exactly, by order number plus order time, and as a list, by order time range,
so there are two sharding algorithms: one for precise values and one for ranges.
Historical data predating the sharding rollout is routed to a single designated table; adapt the rules to your own business.
Precise sharding algorithm
import com.kinggrid.common.utils.DateUtils;
import lombok.extern.slf4j.Slf4j;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;
import org.springframework.stereotype.Component;
import java.util.Collection;
import java.util.Date;
/** @Classname OrderTablePreciseShardingAlgorithm @Date 2023-5-26 10:33 @Author 无朽 */
@Slf4j
@Component
public class OrderTablePreciseShardingAlgorithm implements PreciseShardingAlgorithm<Date> {
private static final Date START_DATE =
DateUtils.dateTime(DateUtils.YYYY_MM_DD_HH_MM_SS, "2023-01-01 00:00:00");
@Override
public String doSharding(
Collection<String> collection, PreciseShardingValue<Date> preciseShardingValue) {
StringBuilder orderTbName =
new StringBuilder(preciseShardingValue.getLogicTableName()).append("_");
        Date shardingValue = preciseShardingValue.getValue();
        // Early data is always routed to the first table
        if (START_DATE.after(shardingValue)) {
            shardingValue = START_DATE;
        }
int year = DateUtils.getYear(shardingValue);
int quarter = DateUtils.getQuarter(shardingValue);
orderTbName.append(year).append("_").append(quarter);
String tableName = orderTbName.toString();
log.info("orderTbName:{}", tableName);
for (String each : collection) {
if (each.equals(tableName)) {
log.info("orderTbName:{}", each);
return each;
}
}
        throw new IllegalArgumentException("no matching physical table: " + tableName);
}
}
Range sharding algorithm
import cn.hutool.core.collection.CollectionUtil;
import com.google.common.collect.Range;
import com.kinggrid.common.utils.DateUtils;
import lombok.extern.slf4j.Slf4j;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingValue;
import org.springframework.stereotype.Component;
import java.time.LocalDate;
import java.time.ZoneId;
import java.util.Collection;
import java.util.Date;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
/** @Classname OrderTableRangeShardingAlgorithm @Date 2023-5-26 10:33 @Author 无朽 */
@Slf4j
@Component
public class OrderTableRangeShardingAlgorithm implements RangeShardingAlgorithm<Date> {
private static final Date START_DATE =
DateUtils.dateTime(DateUtils.YYYY_MM_DD_HH_MM_SS, "2023-01-01 00:00:00");
private static final int MIN_YEAR = 2023;
@Override
public Collection<String> doSharding(
Collection<String> collection, RangeShardingValue<Date> rangeShardingValue) {
Range<Date> valueRange = rangeShardingValue.getValueRange();
boolean hasLowerBound = valueRange.hasLowerBound();
Date startDate = null;
Date endDate = null;
if (hasLowerBound) {
startDate = valueRange.lowerEndpoint();
}
boolean hasUpperBound = valueRange.hasUpperBound();
if (hasUpperBound) {
endDate = valueRange.upperEndpoint();
}
List<String> datePrefix = getDatePrefix(startDate, endDate);
if (CollectionUtil.isEmpty(datePrefix)) {
throw new IllegalArgumentException();
}
String orderTbName = rangeShardingValue.getLogicTableName() + "_";
return datePrefix.stream().map(item -> orderTbName + item).collect(Collectors.toList());
}
    /**
     * Year_quarter prefixes covering the span between two dates.
     *
     * @param startDateStr start of the range (may be null)
     * @param endDateStr end of the range (may be null)
     * @return list of year_quarter prefixes
     */
private List<String> getDatePrefix(Date startDateStr, Date endDateStr) {
if (Objects.isNull(startDateStr)) {
startDateStr = START_DATE;
}
LocalDate startDate = startDateStr.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
        // The query date must not be earlier than 2023
if (startDate.getYear() < MIN_YEAR) {
startDate = START_DATE.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
}
if (Objects.isNull(endDateStr)) {
endDateStr = DateUtils.getNowDate();
}
LocalDate endDate = endDateStr.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
return DateUtils.getYearQuarterList(startDate, endDate);
}
}
File table sharding rule
The file table is only ever queried by primary key id, so a single precise sharding algorithm is enough: the generation time is derived back from the snowflake id and used for routing.
import com.kinggrid.common.utils.DateUtils;
import com.kinggrid.common.utils.IdWorker;
import lombok.extern.slf4j.Slf4j;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;
import org.springframework.stereotype.Component;
import java.util.Collection;
import java.util.Date;
/** @Classname FileTablePreciseShardingAlgorithm @Date 2023-5-26 10:33 @Author 无朽 */
@Slf4j
@Component
public class FileTablePreciseShardingAlgorithm implements PreciseShardingAlgorithm<Long> {
    /** Initial table, for early non-snowflake ids */
    private static final String INITIAL_TABLE = "2023_1";
    /** Minimum digit length of a snowflake id */
    private static final int SNOWFLAKE_LENGTH = 18;
@Override
public String doSharding(
Collection<String> collection, PreciseShardingValue<Long> preciseShardingValue) {
StringBuilder orderTbName =
new StringBuilder(preciseShardingValue.getLogicTableName()).append("_");
Long shardingValue = preciseShardingValue.getValue();
log.info("shardingValue==={}", shardingValue);
        // Early non-snowflake ids are routed straight to the earliest table
if (shardingValue.toString().length() < SNOWFLAKE_LENGTH) {
return orderTbName.append(INITIAL_TABLE).toString();
}
Date date = IdWorker.inverseDate(shardingValue);
int year = DateUtils.getYear(date);
int quarter = DateUtils.getQuarter(date);
orderTbName.append(year).append("_").append(quarter);
String tableName = orderTbName.toString();
log.info("fileTbName:{}", tableName);
for (String each : collection) {
if (each.equals(tableName)) {
log.info("fileTbName:{}", each);
return each;
}
}
        throw new IllegalArgumentException("no matching physical table: " + tableName);
}
}
Custom snowflake key generator
Primary keys are generated with the custom snowflake algorithm
import com.kinggrid.common.utils.IdWorker;
import lombok.Getter;
import lombok.Setter;
import org.apache.shardingsphere.spi.keygen.ShardingKeyGenerator;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicLong;
/** @Classname SnowflakeKeyGenerator @Date 2023-5-31 16:01 @Author 无朽 */
@Setter
@Getter
public class SnowflakeKeyGenerator implements ShardingKeyGenerator {
private AtomicLong atomic = new AtomicLong(0);
private Properties properties;
    @Override
    public Comparable<?> generateKey() {
        // Use the IdWorker singleton: creating a new instance per call would reset
        // the sequence state and could produce duplicate ids within one millisecond
        return IdWorker.getIdWorker(0, 0).nextId();
    }
@Override
public String getType() {
return "KgShardingKey";
}
}
Loading the custom key generator: sharding-jdbc discovers it through the Java SPI mechanism.
- Create a META-INF/services folder under the module's resources directory.
- In it, create a file named org.apache.shardingsphere.spi.keygen.ShardingKeyGenerator.
- Write the fully qualified class name of SnowflakeKeyGenerator into that file.
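As a concrete illustration, the SPI file looks like this; the package name below is hypothetical, so substitute the actual package of your SnowflakeKeyGenerator class:

```
# File: src/main/resources/META-INF/services/org.apache.shardingsphere.spi.keygen.ShardingKeyGenerator
com.example.framework.config.SnowflakeKeyGenerator
```

At startup, sharding-jdbc scans this file and matches the generator by the value returned from getType(), here "KgShardingKey".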
Handling snowflake id precision loss
The generated ids exceed JavaScript's safe integer range, so they lose precision when returned to the front end as numbers.
See also the companion article on handling snowflake ids arriving at the front end with different values.
Here the problem is handled by adding a configuration class that serializes Long values as strings.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.fasterxml.jackson.databind.ser.std.ToStringSerializer;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;
/** @Classname JacksonConfig @Date 2023-5-31 17:15 @Author 无朽 */
@Configuration
public class JacksonConfig {
@Bean
@Primary
@ConditionalOnMissingBean(ObjectMapper.class)
public ObjectMapper jacksonObjectMapper(Jackson2ObjectMapperBuilder builder) {
ObjectMapper objectMapper = builder.createXmlMapper(false).build();
        // Globally customize JSON serialization
        SimpleModule simpleModule = new SimpleModule();
        // Serialize Long values as JSON strings to avoid precision loss
        simpleModule.addSerializer(Long.class, ToStringSerializer.instance);
objectMapper.registerModule(simpleModule);
return objectMapper;
}
}
At this point the sharding-jdbc integration into the framework is complete and can be used by the business code.
Business usage
@DataSource(DataSourceType.SHARDING)
- Add this annotation to each service method that operates on the sharded tables.
- If the whole service works on sharded tables, the annotation can be placed on the class instead.
- If the project uses MyBatis-Plus built-in methods, override them in the service and add the annotation to the override.
- JDBC unsupported features need case-by-case handling; for example, instead of querying yesterday's data with the date_sub function, compute the corresponding time range in the service layer and query by range.
@Service
@DataSource(DataSourceType.SHARDING)
public class KgOrderInfoServiceImpl extends ServiceImpl<KgOrderInfoMapper, KgOrderInfo>
implements IKgOrderInfoService {
@Override
public boolean save(KgOrderInfo entity) {
return super.save(entity);
}
@Override
public List<KgOrderInfo> getByYesterday(String areaCode) {
Calendar cal = Calendar.getInstance();
cal.add(Calendar.DATE, -1);
Date getByYesterday = cal.getTime();
return baseMapper.getByYesterday(
areaCode,
DateUtils.getStartDate(getByYesterday),
DateUtils.getLastDate(getByYesterday));
}
}
This concludes the walkthrough of integrating sharding-jdbc into the RuoYi (若依) framework.