
SpringCloud Microservices in Action: Building an Enterprise Development Framework (29): Integrating Object Storage Services MinIO + Qiniu Cloud + Alibaba Cloud + Tencent Cloud

Source: Turing Education
Date: 2023-10-20 17:52:18

  In a microservice application, image and file storage differs from a monolithic application. A monolith can read and write files on the local disk, whereas microservices must use distributed storage and keep images and files on a stable storage service. Many cloud vendors offer object storage, such as Alibaba Cloud OSS, Tencent Cloud COS, Qiniu Cloud Kodo, and Baidu Cloud BOS; there are also open-source object storage servers such as FastDFS and MinIO.

  If our framework supported only a single storage service, later extension or replacement would be constrained. We therefore define an abstract interface and implement it for each service we want to use; when multiple services are configured, the caller can choose which one to invoke. Here we integrate Qiniu Cloud as the cloud service and MinIO as the open-source service; other providers can be added in the same way.

  Before building the framework, prepare the environment, taking MinIO and Qiniu Cloud as examples. MinIO is simple to install; we use the Linux package, see http://docs.minio.org.cn/docs/ for details. For Qiniu Cloud, registering on the official site and completing real-name verification grants 10 GB of free storage: https://www.qiniu.com/.

I. Implement the base libraries

1. In GitEgg-Platform, create a gitegg-platform-dfs subproject (dfs: Distributed File System) to define the abstract interfaces for object storage, and add IDfsBaseService declaring the common upload and download operations.

```java
/**
 * Distributed file storage operations.
 * As a rule, physical deletion or modification of uploaded files is not allowed,
 * so that system operation records are preserved. Business-level "modify" and
 * "delete" only change the association; a changed file is re-uploaded and re-linked.
 */
public interface IDfsBaseService {

    /**
     * Get a simple upload token.
     */
    String uploadToken(String bucket);

    /**
     * Get an overwrite upload token.
     */
    String uploadToken(String bucket, String key);

    /**
     * Create a bucket.
     */
    void createBucket(String bucket);

    /**
     * Upload a file from a stream with the given file name.
     */
    GitEggDfsFile uploadFile(InputStream inputStream, String fileName);

    /**
     * Upload a file from a stream with the given bucket and file name.
     */
    GitEggDfsFile uploadFile(InputStream inputStream, String bucket, String fileName);

    /**
     * Get the access URL for a file by name.
     */
    String getFileUrl(String fileName);

    /**
     * Get the access URL for a file by bucket and name.
     */
    String getFileUrl(String bucket, String fileName);

    /**
     * Get the access URL for a file by bucket and name, with an expiry period.
     */
    String getFileUrl(String bucket, String fileName, int duration, TimeUnit unit);

    /**
     * Download an object as a stream.
     */
    OutputStream getFileObject(String fileName, OutputStream outputStream);

    /**
     * Download an object as a stream from the given bucket.
     */
    OutputStream getFileObject(String bucket, String fileName, OutputStream outputStream);

    /**
     * Delete a file by name.
     */
    String removeFile(String fileName);

    /**
     * Delete a file by name from the given bucket.
     */
    String removeFile(String bucket, String fileName);

    /**
     * Delete files in batch by a list of names.
     */
    String removeFiles(List<String> fileNames);

    /**
     * Delete files in batch by a list of names from the given bucket.
     */
    String removeFiles(String bucket, List<String> fileNames);
}
```
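Business code is meant to depend only on this abstraction, so a MinIO- or Qiniu-backed implementation can be swapped in without touching callers. A minimal sketch of that idea (the `FileStore` interface, `InMemoryFileStore` stub, and URL prefix below are hypothetical stand-ins for illustration, not part of the framework):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the IDfsBaseService abstraction (hypothetical)
interface FileStore {
    String uploadFile(InputStream in, String fileName) throws IOException;
    String getFileUrl(String fileName);
}

// In-memory implementation, wired in the same way a MinIO- or Qiniu-backed one would be
class InMemoryFileStore implements FileStore {
    private final Map<String, byte[]> objects = new HashMap<>();

    @Override
    public String uploadFile(InputStream in, String fileName) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        objects.put(fileName, buf.toByteArray());
        return fileName;
    }

    @Override
    public String getFileUrl(String fileName) {
        // Assumed URL prefix, playing the role of accessUrlPrefix in the real config
        return "https://dfs.example.com/" + fileName;
    }
}
```

A caller holds only a `FileStore` reference; which concrete class sits behind it is a configuration decision, which is exactly what the factory classes later in this article select at runtime.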

2. In GitEgg-Platform, create a gitegg-platform-dfs-minio subproject, and add MinioDfsServiceImpl and MinioDfsProperties implementing the IDfsBaseService upload and download interface.

```java
@Data
@Component
@ConfigurationProperties(prefix = "dfs.minio")
public class MinioDfsProperties {

    /**
     * AccessKey.
     */
    private String accessKey;

    /**
     * SecretKey.
     */
    private String secretKey;

    /**
     * Region: the physical location of the server, configured in MinIO.
     * Defaults to us-east-1 (US East 1), which is also Amazon S3's default region.
     */
    private String region;

    /**
     * Bucket.
     */
    private String bucket;

    /**
     * Public or private access.
     */
    private Integer accessControl;

    /**
     * Domain address of the upload server.
     */
    private String uploadUrl;

    /**
     * URL prefix for file access.
     */
    private String accessUrlPrefix;

    /**
     * Prefix of the upload directory.
     */
    private String uploadDirPrefix;
}
```
```java
@Slf4j
@AllArgsConstructor
public class MinioDfsServiceImpl implements IDfsBaseService {

    private final MinioClient minioClient;

    private final MinioDfsProperties minioDfsProperties;

    @Override
    public String uploadToken(String bucket) {
        // MinIO uploads go through this service directly; no client-side token is issued.
        return null;
    }

    @Override
    public String uploadToken(String bucket, String key) {
        return null;
    }

    @Override
    public void createBucket(String bucket) {
        BucketExistsArgs bea = BucketExistsArgs.builder().bucket(bucket).build();
        try {
            if (!minioClient.bucketExists(bea)) {
                MakeBucketArgs mba = MakeBucketArgs.builder().bucket(bucket).build();
                minioClient.makeBucket(mba);
            }
        } catch (ErrorResponseException | InsufficientDataException | InternalException
                | InvalidKeyException | InvalidResponseException | IOException
                | NoSuchAlgorithmException | ServerException | XmlParserException e) {
            log.error("Failed to create bucket", e);
        }
    }

    @Override
    public GitEggDfsFile uploadFile(InputStream inputStream, String fileName) {
        return this.uploadFile(inputStream, minioDfsProperties.getBucket(), fileName);
    }

    @Override
    public GitEggDfsFile uploadFile(InputStream inputStream, String bucket, String fileName) {
        GitEggDfsFile dfsFile = new GitEggDfsFile();
        try {
            dfsFile.setBucket(bucket);
            dfsFile.setBucketDomain(minioDfsProperties.getUploadUrl());
            dfsFile.setFileUrl(minioDfsProperties.getAccessUrlPrefix());
            dfsFile.setEncodedFileName(fileName);
            // Unknown object size (-1) with a 5 MB part size for multipart upload.
            minioClient.putObject(PutObjectArgs.builder()
                    .bucket(bucket)
                    .stream(inputStream, -1, 5 * 1024 * 1024)
                    .object(fileName)
                    .build());
        } catch (ErrorResponseException | InsufficientDataException | InternalException
                | InvalidKeyException | InvalidResponseException | IOException
                | NoSuchAlgorithmException | ServerException | XmlParserException e) {
            log.error("Failed to upload file", e);
        }
        return dfsFile;
    }

    @Override
    public String getFileUrl(String fileName) {
        return this.getFileUrl(minioDfsProperties.getBucket(), fileName);
    }

    @Override
    public String getFileUrl(String bucket, String fileName) {
        return this.getFileUrl(bucket, fileName, DfsConstants.DFS_FILE_DURATION, DfsConstants.DFS_FILE_DURATION_UNIT);
    }

    @Override
    public String getFileUrl(String bucket, String fileName, int duration, TimeUnit unit) {
        String url = null;
        try {
            url = minioClient.getPresignedObjectUrl(
                    GetPresignedObjectUrlArgs.builder()
                            .method(Method.GET)
                            .bucket(bucket)
                            .object(fileName)
                            .expiry(duration, unit)
                            .build());
        } catch (ErrorResponseException | InsufficientDataException | InternalException
                | InvalidKeyException | InvalidResponseException | IOException
                | NoSuchAlgorithmException | ServerException | XmlParserException e) {
            log.error("Failed to get presigned file URL", e);
        }
        return url;
    }

    @Override
    public OutputStream getFileObject(String fileName, OutputStream outputStream) {
        return this.getFileObject(minioDfsProperties.getBucket(), fileName, outputStream);
    }

    @Override
    public OutputStream getFileObject(String bucket, String fileName, OutputStream outputStream) {
        // try-with-resources closes the object stream and the buffered wrapper automatically.
        try (InputStream stream = minioClient.getObject(
                     GetObjectArgs.builder()
                             .bucket(bucket)
                             .object(fileName)
                             .build());
             BufferedInputStream bis = new BufferedInputStream(stream)) {
            IOUtils.copy(bis, outputStream);
        } catch (ErrorResponseException | InsufficientDataException | InternalException
                | InvalidKeyException | InvalidResponseException | IOException
                | NoSuchAlgorithmException | ServerException | XmlParserException e) {
            log.error("Failed to download file", e);
        }
        return outputStream;
    }

    @Override
    public String removeFile(String fileName) {
        return this.removeFile(minioDfsProperties.getBucket(), fileName);
    }

    @Override
    public String removeFile(String bucket, String fileName) {
        return this.removeFiles(bucket, Collections.singletonList(fileName));
    }

    @Override
    public String removeFiles(List<String> fileNames) {
        return this.removeFiles(minioDfsProperties.getBucket(), fileNames);
    }

    @Override
    public String removeFiles(String bucket, List<String> fileNames) {
        List<DeleteObject> deleteObjects = new ArrayList<>();
        if (!CollectionUtils.isEmpty(fileNames)) {
            fileNames.forEach(item -> deleteObjects.add(new DeleteObject(item)));
        }
        Iterable<Result<DeleteError>> result = minioClient.removeObjects(RemoveObjectsArgs.builder()
                .bucket(bucket)
                .objects(deleteObjects)
                .build());
        try {
            return JsonUtils.objToJsonIgnoreNull(result);
        } catch (Exception e) {
            log.error("Failed to remove files", e);
        }
        return null;
    }
}
```
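The getFileObject methods above boil down to copying an input stream into the caller's output stream. A generic sketch using only java.io (a stand-in for the `IOUtils.copy` call in the implementation), with try-with-resources replacing the manual close bookkeeping:

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {

    /**
     * Copies in to out with a buffer; closes the input stream,
     * leaves the output stream open for the caller (as the service does).
     */
    public static long copy(InputStream in, OutputStream out) throws IOException {
        long total = 0;
        try (BufferedInputStream bis = new BufferedInputStream(in)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = bis.read(buf)) != -1) {
                out.write(buf, 0, n);
                total += n;
            }
        }
        return total;
    }
}
```

Leaving the output stream open matters here because in the controller it is the servlet response stream, which the container manages.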

3. In GitEgg-Platform, create a gitegg-platform-dfs-qiniu subproject, and add QiNiuDfsServiceImpl and QiNiuDfsProperties implementing the IDfsBaseService upload and download interface.

```java
@Data
@Component
@ConfigurationProperties(prefix = "dfs.qiniu")
public class QiNiuDfsProperties {

    /**
     * AccessKey.
     */
    private String accessKey;

    /**
     * SecretKey.
     */
    private String secretKey;

    /**
     * Qiniu Cloud region (data center).
     */
    private String region;

    /**
     * Bucket (storage space).
     */
    private String bucket;

    /**
     * Public or private access.
     */
    private Integer accessControl;

    /**
     * Domain address of the upload server.
     */
    private String uploadUrl;

    /**
     * URL prefix for file access.
     */
    private String accessUrlPrefix;

    /**
     * Prefix of the upload directory.
     */
    private String uploadDirPrefix;
}
```
```java
@Slf4j
@AllArgsConstructor
public class QiNiuDfsServiceImpl implements IDfsBaseService {

    private final Auth auth;

    private final UploadManager uploadManager;

    private final BucketManager bucketManager;

    private final QiNiuDfsProperties qiNiuDfsProperties;

    @Override
    public String uploadToken(String bucket) {
        Auth auth = Auth.create(qiNiuDfsProperties.getAccessKey(), qiNiuDfsProperties.getSecretKey());
        return auth.uploadToken(bucket);
    }

    @Override
    public String uploadToken(String bucket, String key) {
        Auth auth = Auth.create(qiNiuDfsProperties.getAccessKey(), qiNiuDfsProperties.getSecretKey());
        return auth.uploadToken(bucket, key);
    }

    @Override
    public void createBucket(String bucket) {
        try {
            String[] buckets = bucketManager.buckets();
            if (!ArrayUtil.contains(buckets, bucket)) {
                bucketManager.createBucket(bucket, qiNiuDfsProperties.getRegion());
            }
        } catch (QiniuException e) {
            log.error("Failed to create bucket", e);
        }
    }

    @Override
    public GitEggDfsFile uploadFile(InputStream inputStream, String fileName) {
        return this.uploadFile(inputStream, qiNiuDfsProperties.getBucket(), fileName);
    }

    @Override
    public GitEggDfsFile uploadFile(InputStream inputStream, String bucket, String fileName) {
        GitEggDfsFile dfsFile = null;
        // By default the hash of the file content is used as the file name.
        String key = null;
        if (!StringUtils.isEmpty(fileName)) {
            key = fileName;
        }
        try {
            String upToken = auth.uploadToken(bucket);
            Response response = uploadManager.put(inputStream, key, upToken, null, null);
            // Parse the result of a successful upload.
            dfsFile = JsonUtils.jsonToPojo(response.bodyString(), GitEggDfsFile.class);
            if (dfsFile != null) {
                dfsFile.setBucket(bucket);
                dfsFile.setBucketDomain(qiNiuDfsProperties.getUploadUrl());
                dfsFile.setFileUrl(qiNiuDfsProperties.getAccessUrlPrefix());
                dfsFile.setEncodedFileName(fileName);
            }
        } catch (QiniuException ex) {
            Response r = ex.response;
            log.error(r.toString());
            try {
                log.error(r.bodyString());
            } catch (QiniuException ex2) {
                log.error(ex2.toString());
            }
        } catch (Exception e) {
            log.error(e.toString());
        }
        return dfsFile;
    }

    @Override
    public String getFileUrl(String fileName) {
        return this.getFileUrl(qiNiuDfsProperties.getBucket(), fileName);
    }

    @Override
    public String getFileUrl(String bucket, String fileName) {
        return this.getFileUrl(bucket, fileName, DfsConstants.DFS_FILE_DURATION, DfsConstants.DFS_FILE_DURATION_UNIT);
    }

    @Override
    public String getFileUrl(String bucket, String fileName, int duration, TimeUnit unit) {
        String finalUrl = null;
        try {
            Integer accessControl = qiNiuDfsProperties.getAccessControl();
            if (accessControl != null && DfsConstants.DFS_FILE_PRIVATE == accessControl.intValue()) {
                // Private space: sign the download URL with an expiry period.
                String encodedFileName = URLEncoder.encode(fileName, "utf-8").replace("+", "%20");
                String publicUrl = String.format("%s/%s", qiNiuDfsProperties.getAccessUrlPrefix(), encodedFileName);
                String accessKey = qiNiuDfsProperties.getAccessKey();
                String secretKey = qiNiuDfsProperties.getSecretKey();
                Auth auth = Auth.create(accessKey, secretKey);
                long expireInSeconds = unit.toSeconds(duration);
                finalUrl = auth.privateDownloadUrl(publicUrl, expireInSeconds);
            } else {
                finalUrl = String.format("%s/%s", qiNiuDfsProperties.getAccessUrlPrefix(), fileName);
            }
        } catch (UnsupportedEncodingException e) {
            log.error("Failed to build file URL", e);
        }
        return finalUrl;
    }

    @Override
    public OutputStream getFileObject(String fileName, OutputStream outputStream) {
        return this.getFileObject(qiNiuDfsProperties.getBucket(), fileName, outputStream);
    }

    @Override
    public OutputStream getFileObject(String bucket, String fileName, OutputStream outputStream) {
        HttpURLConnection conn = null;
        BufferedInputStream bis = null;
        try {
            String path = this.getFileUrl(bucket, fileName, DfsConstants.DFS_FILE_DURATION, DfsConstants.DFS_FILE_DURATION_UNIT);
            URL url = new URL(path);
            conn = (HttpURLConnection) url.openConnection();
            // Set the connect timeout.
            conn.setConnectTimeout(DfsConstants.DOWNLOAD_TIMEOUT);
            // Set a User-Agent so the request is not blocked with a 403 error.
            conn.setRequestProperty("User-Agent", "Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)");
            conn.connect();
            // Obtain the input stream and copy it to the caller's output stream.
            bis = new BufferedInputStream(conn.getInputStream());
            IOUtils.copy(bis, outputStream);
        } catch (Exception e) {
            log.error("Exception while reading remote file: " + fileName, e);
        } finally {
            if (conn != null) {
                conn.disconnect();
            }
            if (bis != null) {
                try {
                    bis.close();
                } catch (IOException e) {
                    log.error("Failed to close stream", e);
                }
            }
        }
        return outputStream;
    }

    @Override
    public String removeFile(String fileName) {
        return this.removeFile(qiNiuDfsProperties.getBucket(), fileName);
    }

    @Override
    public String removeFile(String bucket, String fileName) {
        String resultStr = null;
        try {
            Response response = bucketManager.delete(bucket, fileName);
            resultStr = JsonUtils.objToJson(response);
        } catch (QiniuException e) {
            Response r = e.response;
            log.error(r.toString());
            try {
                log.error(r.bodyString());
            } catch (QiniuException ex2) {
                log.error(ex2.toString());
            }
        } catch (Exception e) {
            log.error(e.toString());
        }
        return resultStr;
    }

    @Override
    public String removeFiles(List<String> fileNames) {
        return this.removeFiles(qiNiuDfsProperties.getBucket(), fileNames);
    }

    @Override
    public String removeFiles(String bucket, List<String> fileNames) {
        String resultStr = null;
        try {
            // Qiniu batch operations accept at most 1000 entries per request.
            if (!CollectionUtils.isEmpty(fileNames) && fileNames.size() > GitEggConstant.Number.THOUSAND) {
                throw new BusinessException("A single batch request must not contain more than 1000 files");
            }
            BucketManager.BatchOperations batchOperations = new BucketManager.BatchOperations();
            batchOperations.addDeleteOp(bucket, fileNames.toArray(new String[0]));
            Response response = bucketManager.batch(batchOperations);
            BatchStatus[] batchStatusList = response.jsonToObject(BatchStatus[].class);
            resultStr = JsonUtils.objToJson(batchStatusList);
        } catch (QiniuException ex) {
            log.error(ex.response.toString());
        } catch (Exception e) {
            log.error(e.toString());
        }
        return resultStr;
    }
}
```
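Because a single Qiniu batch request is capped at 1000 entries, a caller with a larger list can split it into consecutive chunks instead of failing outright. A small sketch of that partitioning (the `BatchSplit` class is illustrative, not part of the framework):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplit {

    /**
     * Splits fileNames into consecutive chunks of at most batchSize elements,
     * e.g. batchSize = 1000 for Qiniu's per-request batch limit.
     */
    public static List<List<String>> partition(List<String> fileNames, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < fileNames.size(); i += batchSize) {
            batches.add(fileNames.subList(i, Math.min(i + batchSize, fileNames.size())));
        }
        return batches;
    }
}
```

Each chunk can then be fed to one `BucketManager.BatchOperations` request in a loop.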

4. In GitEgg-Platform, create a gitegg-platform-dfs-starter subproject that aggregates all the upload/download subprojects, so business modules can pull in every implementation with a single dependency.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>GitEgg-Platform</artifactId>
        <groupId>com.gitegg.platform</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>gitegg-platform-dfs-starter</artifactId>
    <name>${project.artifactId}</name>
    <packaging>jar</packaging>

    <dependencies>
        <!-- gitegg distributed file storage extension: MinIO -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-dfs-minio</artifactId>
        </dependency>
        <!-- gitegg distributed file storage extension: Qiniu Cloud -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-dfs-qiniu</artifactId>
        </dependency>
    </dependencies>
</project>
```

5. Add the file storage dependencies to gitegg-platform-bom

```xml
<!-- gitegg distributed file storage extension -->
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-dfs</artifactId>
    <version>${gitegg.project.version}</version>
</dependency>
<!-- gitegg distributed file storage extension: MinIO -->
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-dfs-minio</artifactId>
    <version>${gitegg.project.version}</version>
</dependency>
<!-- gitegg distributed file storage extension: Qiniu Cloud -->
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-dfs-qiniu</artifactId>
    <version>${gitegg.project.version}</version>
</dependency>
<!-- gitegg distributed file storage extension: starter -->
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-dfs-starter</artifactId>
    <version>${gitegg.project.version}</version>
</dependency>
<!-- MinIO object storage service https://mvnrepository.com/artifact/io.minio/minio -->
<dependency>
    <groupId>io.minio</groupId>
    <artifactId>minio</artifactId>
    <version>${dfs.minio.version}</version>
</dependency>
<!-- Qiniu Cloud object storage service -->
<dependency>
    <groupId>com.qiniu</groupId>
    <artifactId>qiniu-java-sdk</artifactId>
    <version>${dfs.qiniu.version}</version>
</dependency>
```
II. Implement the business features

The distributed file storage feature is placed in the gitegg-service-extension project as a system extension. It is divided into several modules:

  • File server basic configuration module
  • Upload/download record module (only downloads of private files are recorded; publicly accessible files are not)
  • Front-end access and download support

1. Create a new file server configuration table that stores the storage server settings, define the table structure, and use the code generation tool to generate the CRUD code.

```sql
CREATE TABLE `t_sys_dfs` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'Primary key',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT 'Tenant id',
  `dfs_type` bigint(20) NULL DEFAULT NULL COMMENT 'Distributed storage type',
  `dfs_code` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Distributed storage code',
  `access_url_prefix` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'File access URL prefix',
  `upload_url` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Distributed storage upload address',
  `bucket` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Bucket name',
  `app_id` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Application id',
  `region` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Region',
  `access_key` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'accessKey',
  `secret_key` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'secretKey',
  `dfs_default` tinyint(2) NOT NULL DEFAULT 0 COMMENT 'Default storage: 0 no, 1 yes',
  `dfs_status` tinyint(2) NOT NULL DEFAULT 1 COMMENT 'Status: 0 disabled, 1 enabled',
  `access_control` tinyint(2) NOT NULL DEFAULT 0 COMMENT 'Access control: 0 private, 1 public',
  `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'Remarks',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT 'Create time',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT 'Creator',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT 'Update time',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT 'Updater',
  `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT 'Deleted: 1 yes, 0 no',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'Distributed storage configuration table' ROW_FORMAT = DYNAMIC;
```

2. Create the DfsQiniuFactory and DfsMinioFactory factory classes, which instantiate the required interface implementation according to the current user's selection.

```java
/**
 * Factory for the Qiniu Cloud upload service interface.
 */
public class DfsQiniuFactory {

    public static IDfsBaseService getDfsBaseService(DfsDTO dfsDTO) {
        Auth auth = Auth.create(dfsDTO.getAccessKey(), dfsDTO.getSecretKey());
        Configuration cfg = new Configuration(Region.autoRegion());
        UploadManager uploadManager = new UploadManager(cfg);
        BucketManager bucketManager = new BucketManager(auth, cfg);
        QiNiuDfsProperties qiNiuDfsProperties = new QiNiuDfsProperties();
        qiNiuDfsProperties.setAccessKey(dfsDTO.getAccessKey());
        qiNiuDfsProperties.setSecretKey(dfsDTO.getSecretKey());
        qiNiuDfsProperties.setRegion(dfsDTO.getRegion());
        qiNiuDfsProperties.setBucket(dfsDTO.getBucket());
        qiNiuDfsProperties.setUploadUrl(dfsDTO.getUploadUrl());
        qiNiuDfsProperties.setAccessUrlPrefix(dfsDTO.getAccessUrlPrefix());
        qiNiuDfsProperties.setAccessControl(dfsDTO.getAccessControl());
        return new QiNiuDfsServiceImpl(auth, uploadManager, bucketManager, qiNiuDfsProperties);
    }
}
```
```java
/**
 * Factory for the MinIO upload service interface.
 */
public class DfsMinioFactory {

    public static IDfsBaseService getDfsBaseService(DfsDTO dfsDTO) {
        MinioClient minioClient = MinioClient.builder()
                .endpoint(dfsDTO.getUploadUrl())
                .credentials(dfsDTO.getAccessKey(), dfsDTO.getSecretKey())
                .build();
        MinioDfsProperties minioDfsProperties = new MinioDfsProperties();
        minioDfsProperties.setAccessKey(dfsDTO.getAccessKey());
        minioDfsProperties.setSecretKey(dfsDTO.getSecretKey());
        minioDfsProperties.setRegion(dfsDTO.getRegion());
        minioDfsProperties.setBucket(dfsDTO.getBucket());
        minioDfsProperties.setUploadUrl(dfsDTO.getUploadUrl());
        minioDfsProperties.setAccessUrlPrefix(dfsDTO.getAccessUrlPrefix());
        minioDfsProperties.setAccessControl(dfsDTO.getAccessControl());
        return new MinioDfsServiceImpl(minioClient, minioDfsProperties);
    }
}
```

3. Create the DfsFactory factory, register it with @Component (singleton by default), and have it build and cache the matching upload/download interface according to the system user's configuration.

```java
/**
 * DfsFactory builds and caches the upload/download service
 * according to the system user's configuration.
 */
@Component
public class DfsFactory {

    /**
     * DfsService cache.
     */
    private final static Map<Long, IDfsBaseService> dfsBaseServiceMap = new ConcurrentHashMap<>();

    /**
     * Get the DfsService for the given configuration.
     *
     * @param dfsDTO distributed storage configuration
     * @return dfsService
     */
    public IDfsBaseService getDfsBaseService(DfsDTO dfsDTO) {
        // Look up the service by dfs id; the id is unique, and each tenant has its own.
        Long dfsId = dfsDTO.getId();
        IDfsBaseService dfsBaseService = dfsBaseServiceMap.get(dfsId);
        if (null == dfsBaseService) {
            try {
                Class<?> cls = Class.forName(DfsFactoryClassEnum.getValue(String.valueOf(dfsDTO.getDfsType())));
                Method staticMethod = cls.getDeclaredMethod(DfsConstants.DFS_SERVICE_FUNCTION, DfsDTO.class);
                dfsBaseService = (IDfsBaseService) staticMethod.invoke(cls, dfsDTO);
                dfsBaseServiceMap.put(dfsId, dfsBaseService);
            } catch (ClassNotFoundException | NoSuchMethodException
                    | IllegalAccessException | InvocationTargetException e) {
                e.printStackTrace();
            }
        }
        return dfsBaseService;
    }
}
```
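One caveat with the check-then-put pattern: on a `ConcurrentHashMap`, `get` followed by `put` is not atomic, so two concurrent callers for the same dfs id can both build a service instance. `computeIfAbsent` closes that window. A minimal, framework-independent sketch of the same cache (class and names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ServiceCache<K, V> {

    private final Map<K, V> cache = new ConcurrentHashMap<>();

    private final Function<K, V> factory;

    public ServiceCache(Function<K, V> factory) {
        this.factory = factory;
    }

    /**
     * Builds the service at most once per key; concurrent callers
     * for the same key all receive the same cached instance.
     */
    public V get(K key) {
        return cache.computeIfAbsent(key, factory);
    }
}
```

Applied to DfsFactory, the factory function would wrap the reflective call to the per-provider factory class.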

4. DfsFactoryClassEnum is used by the DfsFactory factory to instantiate, via reflection, the interface implementation of the corresponding file server.

```java
/**
 * @ClassName: DfsFactoryClassEnum
 * @Description: Distributed storage factory class enum. The dfs table stores the id of the
 *               data dictionary entry, so using that id here saves one database query.
 * @author GitEgg
 * @date 2020-09-19 23:49:45
 */
public enum DfsFactoryClassEnum {

    /**
     * MinIO
     */
    MINIO("2", "com.gitegg.service.extension.dfs.factory.DfsMinioFactory"),

    /**
     * Qiniu Cloud Kodo
     */
    QI_NIU("3", "com.gitegg.service.extension.dfs.factory.DfsQiniuFactory"),

    /**
     * Alibaba Cloud OSS
     */
    ALI_YUN("4", "com.gitegg.service.extension.dfs.factory.DfsAliyunFactory"),

    /**
     * Tencent Cloud COS
     */
    TENCENT("5", "com.gitegg.service.extension.dfs.factory.DfsTencentFactory");

    public String code;

    public String value;

    DfsFactoryClassEnum(String code, String value) {
        this.code = code;
        this.value = value;
    }

    public static String getValue(String code) {
        for (DfsFactoryClassEnum factoryClassEnum : values()) {
            if (factoryClassEnum.getCode().equals(code)) {
                return factoryClassEnum.getValue();
            }
        }
        return null;
    }

    public String getCode() {
        return code;
    }

    public void setCode(String code) {
        this.code = code;
    }

    public String getValue() {
        return value;
    }

    public void setValue(String value) {
        this.value = value;
    }
}
```
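The enum maps a dictionary code to a factory class name, and DfsFactory then invokes that class's static `getDfsBaseService` method reflectively. The core of that mechanism, stripped down to plain JDK code (the `DemoFactory` class and its `String` signature are hypothetical stand-ins for the real factories and `DfsDTO`):

```java
import java.lang.reflect.Method;

// Hypothetical factory with the same static-method shape as DfsMinioFactory / DfsQiniuFactory
class DemoFactory {
    public static String getDfsBaseService(String config) {
        return "service:" + config;
    }
}

public class ReflectiveFactoryDemo {

    /**
     * Resolves the class by name, then invokes its static
     * getDfsBaseService(config) method, mirroring what DfsFactory does.
     */
    public static String create(String className, String config) throws Exception {
        Class<?> cls = Class.forName(className);
        Method m = cls.getDeclaredMethod("getDfsBaseService", String.class);
        // Static method: the receiver argument to invoke() is ignored, so null is fine.
        return (String) m.invoke(null, config);
    }
}
```

Resolving by class name keeps gitegg-service-extension free of compile-time dependencies on providers that are not deployed: an absent factory class only fails when its code is actually selected.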

5. Create the IGitEggDfsService interface for the file upload and download operations required by the business.

```java
/**
 * Business interface for file upload and download.
 */
public interface IGitEggDfsService {

    /**
     * Get an upload token.
     */
    String uploadToken(String dfsCode);

    /**
     * Upload a file.
     */
    GitEggDfsFile uploadFile(String dfsCode, MultipartFile file);

    /**
     * Get the access URL of a file.
     */
    String getFileUrl(String dfsCode, String fileName);

    /**
     * Download a file.
     */
    OutputStream downloadFile(String dfsCode, String fileName, OutputStream outputStream);
}
```

6. Create GitEggDfsServiceImpl, the implementation of IGitEggDfsService, for the file upload and download operations required by the business.

@Slf4j@Service@RequiredArgsConstructor(onConstructor_ = @Autowired)public class GitEggDfsServiceImpl implements IGitEggDfsService {    private final DfsFactory dfsFactory;    private final IDfsService dfsService;    private final IDfsFileService dfsFileService;    @Override    public String uploadToken(String dfsCode) {        QueryDfsDTO queryDfsDTO = new QueryDfsDTO();        queryDfsDTO.setDfsCode(dfsCode);        DfsDTO dfsDTO = dfsService.queryDfs(queryDfsDTO);        IDfsBaseService dfsBaseService = dfsFactory.getDfsBaseService(dfsDTO);        String token = dfsBaseService.uploadToken(dfsDTO.getBucket());        return token;    }    @Override    public GitEggDfsFile uploadFile(String dfsCode, MultipartFile file) {        QueryDfsDTO queryDfsDTO = new QueryDfsDTO();        DfsDTO dfsDTO = null;        // 若上传时没有选择存储方式,然后取默认存储模式        if(StringUtils.isEmpty(dfsCode)) {            queryDfsDTO.setDfsDefault(GitEggConstant.ENABLE);        }        else {            queryDfsDTO.setDfsCode(dfsCode);        }        GitEggDfsFile gitEggDfsFile = null;        DfsFile dfsFile = new DfsFile();        try {            dfsDTO = dfsService.queryDfs(queryDfsDTO);            IDfsBaseService dfsFileService = dfsFactory.getDfsBaseService(dfsDTO);            //获取文件名            String originalName = file.getOriginalFilename();            //获取文件后缀            String extension = FilenameUtils.getExtension(originalName);            String hash = Etag.stream(file.getInputStream(), file.getSize());            String fileName = hash + "." 
+ extension;
            // Save the upload record
            dfsFile.setDfsId(dfsDTO.getId());
            dfsFile.setOriginalName(originalName);
            dfsFile.setFileName(fileName);
            dfsFile.setFileExtension(extension);
            dfsFile.setFileSize(file.getSize());
            dfsFile.setFileStatus(GitEggConstant.ENABLE);
            // Perform the actual file upload
            gitEggDfsFile = dfsFileService.uploadFile(file.getInputStream(), fileName);
            if (gitEggDfsFile != null) {
                gitEggDfsFile.setFileName(originalName);
                gitEggDfsFile.setKey(hash);
                gitEggDfsFile.setHash(hash);
                gitEggDfsFile.setFileSize(file.getSize());
                // Read the access URL only when the upload succeeded, to avoid a NullPointerException
                dfsFile.setAccessUrl(gitEggDfsFile.getFileUrl());
            }
        } catch (IOException e) {
            log.error("File upload failed", e);
            dfsFile.setFileStatus(GitEggConstant.DISABLE);
            dfsFile.setComments(String.valueOf(e));
        } finally {
            dfsFileService.save(dfsFile);
        }
        return gitEggDfsFile;
    }

    @Override
    public String getFileUrl(String dfsCode, String fileName) {
        String fileUrl = null;
        QueryDfsDTO queryDfsDTO = new QueryDfsDTO();
        DfsDTO dfsDTO = null;
        // If no storage was selected at upload time, fall back to the default storage configuration
        if (StringUtils.isEmpty(dfsCode)) {
            queryDfsDTO.setDfsDefault(GitEggConstant.ENABLE);
        } else {
            queryDfsDTO.setDfsCode(dfsCode);
        }
        try {
            dfsDTO = dfsService.queryDfs(queryDfsDTO);
            IDfsBaseService dfsFileService = dfsFactory.getDfsBaseService(dfsDTO);
            fileUrl = dfsFileService.getFileUrl(fileName);
        } catch (Exception e) {
            log.error("Failed to get file URL", e);
        }
        return fileUrl;
    }

    @Override
    public OutputStream downloadFile(String dfsCode, String fileName, OutputStream outputStream) {
        QueryDfsDTO queryDfsDTO = new QueryDfsDTO();
        DfsDTO dfsDTO = null;
        // If no storage was selected at upload time, fall back to the default storage configuration
        if (StringUtils.isEmpty(dfsCode)) {
            queryDfsDTO.setDfsDefault(GitEggConstant.ENABLE);
        } else {
            queryDfsDTO.setDfsCode(dfsCode);
        }
        try {
            dfsDTO = dfsService.queryDfs(queryDfsDTO);
            IDfsBaseService dfsFileService = dfsFactory.getDfsBaseService(dfsDTO);
            outputStream = dfsFileService.getFileObject(fileName, outputStream);
        } catch (Exception e) {
            log.error("File download failed", e);
        }
        return outputStream;
    }
}
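The implementation above obtains the concrete storage client through dfsFactory.getDfsBaseService(dfsDTO), whose source is not shown in this section. As a hedged illustration of that selection pattern only, the sketch below caches one client instance per storage type and creates it lazily; the class names DfsFactorySketch, MinioDfsService, and QiniuDfsService, the string type keys, and the simplified IDfsBaseService interface are assumptions for this example, not the project's actual code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Simplified stand-in for the project's storage abstraction.
interface IDfsBaseService {
    String getFileUrl(String fileName);
}

// Hypothetical concrete implementations, one per storage provider.
class MinioDfsService implements IDfsBaseService {
    public String getFileUrl(String fileName) { return "http://minio.example.com/" + fileName; }
}

class QiniuDfsService implements IDfsBaseService {
    public String getFileUrl(String fileName) { return "http://qiniu.example.com/" + fileName; }
}

public class DfsFactorySketch {
    // One client per storage type, created on first use and reused afterwards.
    private final Map<String, IDfsBaseService> cache = new ConcurrentHashMap<>();
    private final Map<String, Supplier<IDfsBaseService>> registry = Map.of(
            "minio", MinioDfsService::new,
            "qiniu", QiniuDfsService::new);

    public IDfsBaseService getDfsBaseService(String dfsType) {
        Supplier<IDfsBaseService> supplier = registry.get(dfsType);
        if (supplier == null) {
            throw new IllegalArgumentException("Unsupported storage type: " + dfsType);
        }
        return cache.computeIfAbsent(dfsType, t -> supplier.get());
    }

    public static void main(String[] args) {
        DfsFactorySketch factory = new DfsFactorySketch();
        System.out.println(factory.getDfsBaseService("minio").getFileUrl("a.png"));
    }
}
```

Caching per type matters here because real storage clients (MinIO/Qiniu SDK clients) hold connection state and are expensive to recreate on every request.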

7. Create GitEggDfsController as the common access controller for upload and download

@RestController
@RequestMapping("/extension")
@RequiredArgsConstructor(onConstructor_ = @Autowired)
@Api(value = "GitEggDfsController")
@RefreshScope
public class GitEggDfsController {

    private final IGitEggDfsService gitEggDfsService;

    /**
     * Upload files
     * @param uploadFile
     * @param dfsCode
     * @return
     */
    @PostMapping("/upload/file")
    public Result<?> uploadFile(@RequestParam("uploadFile") MultipartFile[] uploadFile, String dfsCode) {
        GitEggDfsFile gitEggDfsFile = null;
        if (ArrayUtils.isNotEmpty(uploadFile)) {
            for (MultipartFile file : uploadFile) {
                gitEggDfsFile = gitEggDfsService.uploadFile(dfsCode, file);
            }
        }
        return Result.data(gitEggDfsFile);
    }

    /**
     * Get the file access URL by file name
     */
    @GetMapping("/get/file/url")
    @ApiOperation(value = "Get the file access URL by file name")
    public Result<?> query(String dfsCode, String fileName) {
        String fileUrl = gitEggDfsService.getFileUrl(dfsCode, fileName);
        return Result.data(fileUrl);
    }

    /**
     * Download a file as a stream
     */
    @GetMapping("/get/file/download")
    public void downloadFile(HttpServletResponse response, HttpServletRequest request, String dfsCode, String fileName) {
        if (fileName != null) {
            response.setCharacterEncoding(request.getCharacterEncoding());
            response.setContentType("application/octet-stream");
            // Note: non-ASCII file names should be URL-encoded before being put in this header
            response.addHeader("Content-Disposition", "attachment;fileName=" + fileName);
            OutputStream os = null;
            try {
                os = response.getOutputStream();
                os = gitEggDfsService.downloadFile(dfsCode, fileName, os);
                os.flush();
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                // Close once, in finally, so the stream is released on both success and failure
                if (os != null) {
                    try {
                        os.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
}

8. Implement upload and download on the front end. Note that when axios requests a file stream for download, responseType: 'blob' must be set.

  • Upload
            handleUploadTest (row) {
                this.fileList = []
                this.uploading = false
                this.uploadForm.dfsType = row.dfsType
                this.uploadForm.dfsCode = row.dfsCode
                this.uploadForm.uploadFile = null
                this.dialogTestUploadVisible = true
            },
            handleRemove (file) {
                const index = this.fileList.indexOf(file)
                const newFileList = this.fileList.slice()
                newFileList.splice(index, 1)
                this.fileList = newFileList
            },
            beforeUpload (file) {
                this.fileList = [...this.fileList, file]
                return false
            },
            handleUpload () {
                this.uploadedFileName = ''
                const { fileList } = this
                const formData = new FormData()
                formData.append('dfsCode', this.uploadForm.dfsCode)
                fileList.forEach(file => {
                    formData.append('uploadFile', file)
                })
                this.uploading = true
                dfsUpload(formData).then(() => {
                    this.fileList = []
                    this.uploading = false
                    this.$message.success('Upload succeeded')
                }).catch(err => {
                    console.log('uploading', err)
                    this.$message.error('Upload failed')
                })
            }
  • Download
            getFileUrl (row) {
                this.listLoading = true
                this.fileDownload.dfsCode = row.dfsCode
                this.fileDownload.fileName = row.fileName
                dfsGetFileUrl(this.fileDownload).then(response => {
                    window.open(response.data)
                    this.listLoading = false
                })
            },
            downLoadFile (row) {
                this.listLoading = true
                this.fileDownload.dfsCode = row.dfsCode
                this.fileDownload.fileName = row.fileName
                // responseType: 'blob' is already set inside dfsDownloadFileUrl,
                // so it must not be added to the query parameters here
                dfsDownloadFileUrl(this.fileDownload).then(response => {
                    const blob = new Blob([response.data])
                    const fileName = row.originalName
                    const elink = document.createElement('a')
                    elink.download = fileName
                    elink.style.display = 'none'
                    elink.href = URL.createObjectURL(blob)
                    document.body.appendChild(elink)
                    elink.click()
                    URL.revokeObjectURL(elink.href)
                    document.body.removeChild(elink)
                    this.listLoading = false
                })
            }
  • Frontend API
import request from '@/utils/request'

export function dfsUpload (formData) {
    return request({
        url: '/gitegg-service-extension/extension/upload/file',
        method: 'post',
        data: formData
    })
}

export function dfsGetFileUrl (query) {
    return request({
        url: '/gitegg-service-extension/extension/get/file/url',
        method: 'get',
        params: query
    })
}

export function dfsDownloadFileUrl (query) {
    return request({
        url: '/gitegg-service-extension/extension/get/file/download',
        method: 'get',
        responseType: 'blob',
        params: query
    })
}
III. Feature Test Screens

1. Batch upload interface (screenshot)
2. File-stream download and file URL retrieval (screenshot)

Notes:

1. Preventing duplicate file names: the stored file name is derived uniformly with Qiniu's hash algorithm, so identical content maps to the same name and duplicate files are avoided. When a file name needs to be shown in the interface, the original name stored in the database field is displayed instead. Every uploaded file is recorded.
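The note above relies on content-derived names for de-duplication. Qiniu's actual etag is its own block-wise SHA-1-based scheme; as a hedged sketch of the same idea only, the example below derives the stored name from a SHA-256 digest of the content, keeping the original extension. The class and method names here are illustrative, not the project's real code:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative content-addressed naming: identical content always maps to the
// same stored name, so re-uploading the same file never creates a duplicate
// object. (Qiniu's real etag algorithm is a different, SHA-1-based scheme.)
public class ContentHashNaming {

    // Returns the hex-encoded SHA-256 digest of the file content.
    public static String contentHash(byte[] content) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest(content)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Stored name = content hash + original extension; the original display
    // name lives only in the database record.
    public static String storedName(byte[] content, String extension) {
        return contentHash(content) + extension;
    }

    public static void main(String[] args) {
        System.out.println(storedName("hello".getBytes(), ".png"));
    }
}
```

With this scheme the database row keeps both names: the hash-based name locates the object in storage, while the original name is what the UI displays.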