
auth v2 not available anymore (which means - AWS Java SDK V2 cannot connect) #19720

Closed

radekapreel opened this issue May 10, 2024 · 7 comments

@radekapreel
NOTE

If this case is urgent, please subscribe to SUBNET so that our 24/7 support team can help you faster.

Expected Behavior

Ability to set auth to v2 via some configuration property.
The docs state that:

The process of verifying the identity of a connecting client. MinIO requires clients authenticate using [AWS Signature Version 4 protocol](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) with support for the deprecated Signature Version 2 protocol.

Current Behavior

Only v4 works. Or at least, multiple Stack Overflow threads point out that MinIO doesn't accept auth v2, which is why I'm getting these errors:

  • The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. (Service: S3, Status Code: 400...
  • The specified bucket is not valid

And yes, the bucket is valid: the MinIO Java client uses literally the same string as the bucket name, and it works.

Possible Solution

Bring back the v2 support

Steps to Reproduce (for bugs)

  1. Start MinIO with Docker Compose:

```yaml
version: '3'

services:
  minio:
    image: docker.io/bitnami/minio:latest
    ports:
      - '9000:9000'
      - '9001:9001'
    networks:
      - minionetwork
    volumes:
      - 'minio_data:/data'
    environment:
      - MINIO_ROOT_USER=your_username
      - MINIO_ROOT_PASSWORD=your_password
      - MINIO_DEFAULT_BUCKETS=test-minio-s3

networks:
  minionetwork:
    driver: bridge

volumes:
  minio_data:
    driver: local
```
  2. Set up the AWS S3 Java SDK v2 client:

```java
S3Client client = S3Client.builder()
        .endpointOverride(URI.create(config.getUrl()))
        .httpClientBuilder(ApacheHttpClient.builder())
        .credentialsProvider(
                StaticCredentialsProvider.create(
                        AwsBasicCredentials.builder()
                                .accessKeyId(config.getUsername())
                                .secretAccessKey(config.getPassword())
                                .build()
                )
        )
        .build();
```
  3. Try to upload a document:

```java
PutObjectRequest objectRequest = PutObjectRequest.builder()
        .bucket(config.getBucketName())
        .key(info.getPath())
        .build();

InputStream bais = new BufferedInputStream(info.getContent());

final var putObjectResponse = client.putObject(objectRequest,
        RequestBody.fromInputStream(bais, bais.available())
);

bais.close();
```
  4. Observe the error:

```
software.amazon.awssdk.services.s3.model.S3Exception: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. (Service: S3, Status Code: 400, Request ID: 17CE2050B24A0BDF)
```
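As an aside, unrelated to the signing error in step 4: `InputStream.available()` is not guaranteed to return the stream's total length, yet the length passed to `RequestBody.fromInputStream` must be exact. A minimal stdlib sketch of a safer pattern (the sample content here is hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SafeLength {
    public static void main(String[] args) throws IOException {
        InputStream content = new ByteArrayInputStream("hello minio".getBytes());
        // Read everything up front so the exact length is known, instead of
        // trusting available(), which may under-report for many stream types.
        byte[] bytes = content.readAllBytes();
        System.out.println("content length: " + bytes.length);
        // The bytes can then back the upload request, e.g.
        // RequestBody.fromBytes(bytes) in the AWS SDK v2, which computes
        // the content length itself.
    }
}
```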

Context

This has already been described here:
https://stackoverflow.com/questions/78444784/aws-java-sdk-2-putobject-minio-the-authorization-mechanism-you-have-provide

My question is: is there any working example of a MinIO server working with the new AWS Java SDK (v2)?
There seems to be a problem with the v2 SDK using v2 auth by default, and I didn't find any way to change that behavior.

I'd be happy with either:

  • changing the client to work with auth v4
  • changing the server to work with auth v2

Regression

Your Environment

  • Version used 2024-05-10T01:41:38Z (taken from the UI)
  • Server setup and configuration: visible in the docker compose
  • Operating System and version: not relevant
@vadmeste
Member

> image: docker.io/bitnami/minio:latest

This image is not supported. Can you test using the minio/minio:latest image and minio/minio:RELEASE.2023-05-04T21-44-30Z?

@harshavardhana
Member

Share the `mc admin trace -v` output.

@ramondeklein
Contributor

ramondeklein commented May 10, 2024

Which version of the AWS SDK for Java did you use? I wrote a small test that uses software.amazon.awssdk v2.25.49 (the latest version at this time) and it works fine:

```java
package java2;

import java.io.File;
import java.net.URI;
import software.amazon.awssdk.auth.credentials.*;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

public class App {
    public static void main(String[] args) {
        S3Client client = S3Client.builder()
                .endpointOverride(URI.create("http://localhost:9000/"))
                .forcePathStyle(true)          // <-- THIS IS IMPORTANT
                .credentialsProvider(
                        StaticCredentialsProvider.create(
                                AwsBasicCredentials.builder()
                                        .accessKeyId("minioadmin")
                                        .secretAccessKey("minioadmin")
                                        .build()
                        )
                )
                .region(Region.US_EAST_1)
                .build();

        ListBucketsResponse resp = client.listBuckets();
        for (Bucket bucket : resp.buckets()) {
            System.out.println(bucket.name());
        }

        PutObjectRequest putRequest = PutObjectRequest.builder()
            .bucket("test")
            .key("my-test")
            .build();
        PutObjectResponse putResponse = client.putObject(putRequest, RequestBody.fromFile(new File("test-data")));
        System.out.println(putResponse.eTag());
    }
}
```

It properly lists and prints all buckets (also tried it with the Bitnami image). It also uploaded the file without any issues.

@ramondeklein ramondeklein self-assigned this May 10, 2024
@ramondeklein
Contributor

Did you include forcePathStyle(true) when creating the client? That's important; otherwise the AWS SDK puts the bucket name in the hostname.
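A quick illustration of what that means for the request URL. This is just a sketch: localhost:9000 and the bucket/key names are the examples from this thread, not anything the SDK mandates.

```java
import java.net.URI;

public class AddressingStyles {
    public static void main(String[] args) {
        String bucket = "test-minio-s3";   // example bucket from the compose file
        String key = "my-key";             // hypothetical object key

        // Path style (forcePathStyle(true)): the bucket travels in the URL
        // path, so the plain endpoint host resolves normally.
        URI pathStyle = URI.create("http://localhost:9000/" + bucket + "/" + key);

        // Virtual-hosted style (the SDK default): the bucket becomes a
        // hostname label, which only works when DNS can resolve it.
        URI virtualHosted = URI.create("http://" + bucket + ".localhost:9000/" + key);

        System.out.println("path-style host:      " + pathStyle.getHost());
        System.out.println("virtual-hosted host:  " + virtualHosted.getHost());
    }
}
```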

@radekapreel
Author

Hey,
I think answering the other questions doesn't make sense, since forcePathStyle worked.
It would be perfect to have some docs covering this mandatory setting for connecting with the AWS client; the example above would be more than sufficient!
Or a different error message ;)

thank you for your answer

cheers!

ps. maybe a bit more context: the error was even more misleading, because I could list objects with no results (as in, the existing objects were not returned, but there was no error, simply 0 objects returned).

@harshavardhana
Member

> ps. maybe a bit more context: the error was even more misleading, because I could list objects with no results (as in, the existing objects were not returned, but there was no error, simply 0 objects returned).

The problem is that your DNS somehow resolved the bucket hostname, so there is no way for the server to return an error in that situation: it looks like a valid call.

However, the reply would then be a ListBuckets() response, not ListObjects(). The client silently ignores this mismatched reply, which is also a client bug.

We keep our own SDKs clearly documented and up to date; documenting AWS SDKs takes time, and we only do it as a courtesy for our users. It is not expected to be complete at any given time.
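To check whether the virtual-hosted name resolves on a given machine, a small diagnostic sketch (test-minio-s3 is the example bucket from the compose file; whether the second lookup succeeds depends entirely on the local resolver):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    // Returns true when the OS resolver can map the host name to an address.
    static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The plain endpoint host always resolves.
        System.out.println("localhost: " + resolves("localhost"));
        // The bucket-prefixed name is system-dependent; when it resolves
        // (e.g. via a resolver that maps *.localhost to loopback), the
        // server receives what looks like a valid request.
        System.out.println("test-minio-s3.localhost: " + resolves("test-minio-s3.localhost"));
    }
}
```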

@ramondeklein
Contributor

I also updated the StackOverflow answer to provide some more context. If the DNS name cannot be resolved, then the error message is correct (it reports that the hostname cannot be resolved).
