Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 3.4.0
- Reviewed
Description
While working on YARN-10814, I found this bug in HttpFS. I investigated the problem and already have a fix for it.
If the deprecated httpfs.authentication.signature.secret.file is not set in the configuration (e.g. httpfs-site.xml), then the new hadoop.http.authentication.signature.secret.file config option won't be used either: the server silently falls back to the random secret provider.
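For illustration, a minimal httpfs-site.xml that triggers the silent fallback might look like the following (the secret file path is a placeholder, not taken from the issue):

```xml
<configuration>
  <!-- New-style property: silently ignored by HttpFS before this fix -->
  <property>
    <name>hadoop.http.authentication.signature.secret.file</name>
    <value>/etc/security/httpfs-signature.secret</value>
  </property>
  <!-- The deprecated httpfs.authentication.signature.secret.file is NOT set,
       so HttpFS falls back to the random signer secret provider. -->
</configuration>
```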
When building the server, HttpFSServerWebServer sets authFilterConfigurationPrefix to the old prefix (httpfs.authentication.). Later, AuthenticationFilter.constructSecretProvider immediately falls back to the random provider, because the filter config built from that prefix doesn't contain the secret file. Only when the old property was also set did it pick up the file and set the provider to the file type.
The filter configuration should be built from both the old and the new prefix, merging the two. In my opinion, the new config option should win.
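The merge described above can be sketched in plain Java, independent of the Hadoop Configuration class (class and method names here are hypothetical, for illustration only): entries matching each prefix are copied into the filter config with the prefix stripped, old prefix first, so the new prefix overwrites on conflict.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of merging old- and new-prefix properties into one filter config,
// with the new prefix winning on conflicts. Not the actual HttpFS code.
class PrefixMerge {
    static final String OLD_PREFIX = "httpfs.authentication.";
    static final String NEW_PREFIX = "hadoop.http.authentication.";

    // Copy every entry whose key starts with the prefix into `out`,
    // stripping the prefix from the key.
    static void copyWithPrefix(Map<String, String> conf, String prefix,
                               Map<String, String> out) {
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                out.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
    }

    // Old prefix first, then new, so new-prefix values overwrite old ones.
    static Map<String, String> buildFilterConfig(Map<String, String> conf) {
        Map<String, String> filterConfig = new HashMap<>();
        copyWithPrefix(conf, OLD_PREFIX, filterConfig);
        copyWithPrefix(conf, NEW_PREFIX, filterConfig);
        return filterConfig;
    }
}
```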
There is another, closely related issue in HttpFSAuthenticationFilter.
If both config options are set, then HttpFSAuthenticationFilter fails with an impossible, unsubstituted file path (e.g. ${httpfs.config.dir}/httpfs-signature.secret).
HttpFSAuthenticationFilter constructs the configuration by filtering on the new config prefix first, then on the old one. The old-prefix code works correctly because it uses conf.get(key), which performs variable substitution, instead of entry.getValue(), which returns the raw, unsubstituted file path mentioned above. The code duplication can be eliminated, and I think it would be better to reverse the order: first add the config options from the old prefix, then the new ones, letting the new values overwrite the old with a warning log message.
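The conf.get(key) vs. entry.getValue() difference can be shown with a toy model of Hadoop Configuration's variable expansion (ToyConf and its methods are illustrative, not the real Hadoop API): get() expands ${var} references, while reading the raw entry value does not, which is exactly how the unsubstituted ${httpfs.config.dir} path leaks through.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy model of Hadoop Configuration's variable expansion: get() expands
// ${var} references recursively, while getRaw() (like entry.getValue()
// when iterating entries) returns the stored value untouched.
class ToyConf {
    private final Map<String, String> raw = new HashMap<>();
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    void set(String key, String value) { raw.put(key, value); }

    // Like entry.getValue(): no substitution.
    String getRaw(String key) { return raw.get(key); }

    // Like conf.get(key): expand ${var} using other config values.
    String get(String key) {
        String value = raw.get(key);
        if (value == null) return null;
        Matcher m = VAR.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String substituted = get(m.group(1));
            // Leave the reference in place if the variable is undefined.
            m.appendReplacement(sb, Matcher.quoteReplacement(
                substituted != null ? substituted : m.group(0)));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```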
Attachments
Issue Links
- relates to
  - HDFS-16240 Replace unshaded guava in HttpFSServerWebServer (Resolved)
- links to