@Override
public void timeseriesAggregate(RpcController controller,
    AggregateProtos.TimeSeriesAggregateRequest request,
    RpcCallback<AggregateProtos.AggregateResult> done) {
  AggregateResult result = null;
  try {
    result = this.aggregate(
        ProtoBufConverter.fromPBEntityDefinition(request.getEntityDefinition()),
        ProtoBufConverter.fromPBScan(request.getScan()),
        ProtoBufConverter.fromPBStringList(request.getGroupbyFieldsList()),
        ProtoBufConverter.fromPBByteArrayList(request.getAggregateFuncTypesList()),
        ProtoBufConverter.fromPBStringList(request.getAggregatedFieldsList()),
        request.getStartTime(),
        request.getEndTime(),
        request.getIntervalMin());
  } catch (IOException e) {
    LOG.error("Failed to perform time-series aggregate from PB-based request", e);
    ResponseConverter.setControllerException(controller, e);
  }
  try {
    done.run(ProtoBufConverter.toPBAggregateResult(result));
  } catch (IOException e) {
    LOG.error("Failed to convert result to PB-based message", e);
    ResponseConverter.setControllerException(controller, e);
  }
}
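// --- Illustrative sketch (client side), not part of the endpoint above: building the request
// that timeseriesAggregate consumes. The builder methods are the standard protobuf counterparts
// of the getters used above (setStartTime/getStartTime, addAllGroupbyFields/getGroupbyFieldsList,
// and so on); the pb* locals and the sample field names are placeholders, and the PB payloads are
// assumed to come from matching ProtoBufConverter.toPB* helpers rather than taken from this code.
AggregateProtos.TimeSeriesAggregateRequest request =
    AggregateProtos.TimeSeriesAggregateRequest.newBuilder()
        .setEntityDefinition(pbEntityDefinition)        // PB form of the entity definition
        .setScan(pbScan)                                // PB form of the HBase scan to aggregate over
        .addAllGroupbyFields(Arrays.asList("site"))     // hypothetical group-by field
        .addAllAggregateFuncTypes(pbAggregateFuncTypes) // encoded aggregate function types (bytes)
        .addAllAggregatedFields(Arrays.asList("count")) // hypothetical aggregated field
        .setStartTime(startTime)
        .setEndTime(endTime)
        .setIntervalMin(5)                              // one bucket per 5 minutes
        .build();
// The request is then sent through the generated coprocessor service stub, and the result is
// read back from an RpcCallback, mirroring the done.run(...) completion above.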
@Override
public void cleanupBulkLoad(RpcController controller,
    CleanupBulkLoadRequest request,
    RpcCallback<CleanupBulkLoadResponse> done) {
  try {
    List<BulkLoadObserver> bulkLoadObservers = getBulkLoadObservers();
    if (bulkLoadObservers != null) {
      ObserverContext<RegionCoprocessorEnvironment> ctx =
          new ObserverContext<RegionCoprocessorEnvironment>();
      ctx.prepare(env);
      for (BulkLoadObserver bulkLoadObserver : bulkLoadObservers) {
        bulkLoadObserver.preCleanupBulkLoad(ctx, request);
      }
    }
    fs.delete(new Path(request.getBulkToken()), true);
    done.run(CleanupBulkLoadResponse.newBuilder().build());
  } catch (IOException e) {
    ResponseConverter.setControllerException(controller, e);
    // Complete the callback only on failure here; the success path has already run it above,
    // and the exception is reported through the controller.
    done.run(null);
  }
}
@Override
public void prepareBulkLoad(RpcController controller,
    PrepareBulkLoadRequest request,
    RpcCallback<PrepareBulkLoadResponse> done) {
  try {
    List<BulkLoadObserver> bulkLoadObservers = getBulkLoadObservers();
    if (bulkLoadObservers != null) {
      ObserverContext<RegionCoprocessorEnvironment> ctx =
          new ObserverContext<RegionCoprocessorEnvironment>();
      ctx.prepare(env);
      for (BulkLoadObserver bulkLoadObserver : bulkLoadObservers) {
        bulkLoadObserver.prePrepareBulkLoad(ctx, request);
      }
    }
    String bulkToken = createStagingDir(baseStagingDir, getActiveUser(),
        ProtobufUtil.toTableName(request.getTableName())).toString();
    done.run(PrepareBulkLoadResponse.newBuilder().setBulkToken(bulkToken).build());
  } catch (IOException e) {
    ResponseConverter.setControllerException(controller, e);
    // Complete the callback only on failure here; the success path has already run it above,
    // and the exception is reported through the controller.
    done.run(null);
  }
}
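// --- Illustrative sketch (client side), not part of the endpoint above: obtaining a bulk-load
// staging token from prepareBulkLoad through the generated protobuf stub. HBase's own
// SecureBulkLoadClient wraps the same call; this sketch assumes a 0.98/1.x-era client API
// (Table.coprocessorService, ServerRpcController, BlockingRpcCallback) and placeholder arguments.
static String requestBulkToken(Table table, TableName tableName) throws IOException {
  CoprocessorRpcChannel channel = table.coprocessorService(HConstants.EMPTY_START_ROW);
  SecureBulkLoadProtos.SecureBulkLoadService service =
      SecureBulkLoadProtos.SecureBulkLoadService.newStub(channel);
  ServerRpcController controller = new ServerRpcController();
  BlockingRpcCallback<PrepareBulkLoadResponse> callback =
      new BlockingRpcCallback<PrepareBulkLoadResponse>();
  service.prepareBulkLoad(controller,
      PrepareBulkLoadRequest.newBuilder()
          .setTableName(ProtobufUtil.toProtoTableName(tableName))
          .build(),
      callback);
  PrepareBulkLoadResponse response = callback.get();
  if (controller.failedOnException()) {
    throw controller.getFailedOn();
  }
  return response.getBulkToken();
}
// The matching teardown builds CleanupBulkLoadRequest.newBuilder().setBulkToken(token) and
// invokes service.cleanupBulkLoad(...) the same way once the load has finished.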
@Override
public void getAuthenticationToken(RpcController controller,
    AuthenticationProtos.GetAuthenticationTokenRequest request,
    RpcCallback<AuthenticationProtos.GetAuthenticationTokenResponse> done) {
  AuthenticationProtos.GetAuthenticationTokenResponse.Builder response =
      AuthenticationProtos.GetAuthenticationTokenResponse.newBuilder();
  try {
    if (secretManager == null) {
      throw new IOException("No secret manager configured for token authentication");
    }
    User currentUser = RequestContext.getRequestUser();
    UserGroupInformation ugi = null;
    if (currentUser != null) {
      ugi = currentUser.getUGI();
    }
    if (currentUser == null) {
      throw new AccessDeniedException("No authenticated user for request!");
    } else if (!isAllowedDelegationTokenOp(ugi)) {
      LOG.warn("Token generation denied for user=" + currentUser.getName()
          + ", authMethod=" + ugi.getAuthenticationMethod());
      throw new AccessDeniedException(
          "Token generation only allowed for Kerberos authenticated clients");
    }
    Token<AuthenticationTokenIdentifier> token =
        secretManager.generateToken(currentUser.getName());
    response.setToken(ProtobufUtil.toToken(token));
  } catch (IOException ioe) {
    ResponseConverter.setControllerException(controller, ioe);
  }
  done.run(response.build());
}
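// --- Illustrative sketch (client side), not part of the endpoint above: obtaining the token that
// getAuthenticationToken serves. TokenUtil (org.apache.hadoop.hbase.security.token.TokenUtil)
// wraps the coprocessor call; the Configuration-based overload shown here is from 0.98/1.x-era
// clients and is an assumption about the version in use.
Token<AuthenticationTokenIdentifier> token = TokenUtil.obtainToken(conf);
// Attach the token to the current user's credentials so later calls (for example, MapReduce
// tasks submitted by this user) can authenticate without a Kerberos ticket.
UserGroupInformation.getCurrentUser().addToken(token);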
@Override
public void secureBulkLoadHFiles(RpcController controller,
    SecureBulkLoadHFilesRequest request,
    RpcCallback<SecureBulkLoadHFilesResponse> done) {
  final List<Pair<byte[], String>> familyPaths = new ArrayList<Pair<byte[], String>>();
  for (ClientProtos.BulkLoadHFileRequest.FamilyPath el : request.getFamilyPathList()) {
    familyPaths.add(new Pair<byte[], String>(el.getFamily().toByteArray(), el.getPath()));
  }

  Token userToken = null;
  if (userProvider.isHadoopSecurityEnabled()) {
    userToken = new Token(
        request.getFsToken().getIdentifier().toByteArray(),
        request.getFsToken().getPassword().toByteArray(),
        new Text(request.getFsToken().getKind()),
        new Text(request.getFsToken().getService()));
  }
  final String bulkToken = request.getBulkToken();
  User user = getActiveUser();
  final UserGroupInformation ugi = user.getUGI();
  if (userToken != null) {
    ugi.addToken(userToken);
  } else if (userProvider.isHadoopSecurityEnabled()) {
    // A missing user token is only tolerated in "simple" security mode
    // (used for mini-cluster testing); with security enabled it is an error.
    ResponseConverter.setControllerException(controller,
        new DoNotRetryIOException("User token cannot be null"));
    done.run(SecureBulkLoadHFilesResponse.newBuilder().setLoaded(false).build());
    return;
  }

  HRegion region = env.getRegion();
  boolean bypass = false;
  if (region.getCoprocessorHost() != null) {
    try {
      bypass = region.getCoprocessorHost().preBulkLoadHFile(familyPaths);
    } catch (IOException e) {
      ResponseConverter.setControllerException(controller, e);
      done.run(SecureBulkLoadHFilesResponse.newBuilder().setLoaded(false).build());
      return;
    }
  }
  boolean loaded = false;
  if (!bypass) {
    // Get the target fs (HBase region server fs) delegation token.
    // Since the permission has been checked via 'preBulkLoadHFile', give the 'request user'
    // the token necessary to operate on the target fs. After this point the 'doAs' user holds
    // two tokens: one for the source fs ('request user'), another for the target fs
    // (HBase region server principal).
    if (userProvider.isHadoopSecurityEnabled()) {
      FsDelegationToken targetfsDelegationToken = new FsDelegationToken(userProvider, "renewer");
      try {
        targetfsDelegationToken.acquireDelegationToken(fs);
      } catch (IOException e) {
        ResponseConverter.setControllerException(controller, e);
        done.run(SecureBulkLoadHFilesResponse.newBuilder().setLoaded(false).build());
        return;
      }
      Token<?> targetFsToken = targetfsDelegationToken.getUserToken();
      if (targetFsToken != null
          && (userToken == null || !targetFsToken.getService().equals(userToken.getService()))) {
        ugi.addToken(targetFsToken);
      }
    }

    loaded = ugi.doAs(new PrivilegedAction<Boolean>() {
      @Override
      public Boolean run() {
        FileSystem fs = null;
        try {
          Configuration conf = env.getConfiguration();
          fs = FileSystem.get(conf);
          for (Pair<byte[], String> el : familyPaths) {
            Path stageFamily = new Path(bulkToken, Bytes.toString(el.getFirst()));
            if (!fs.exists(stageFamily)) {
              fs.mkdirs(stageFamily);
              fs.setPermission(stageFamily, PERM_ALL_ACCESS);
            }
          }
          // We call bulkLoadHFiles as the requesting user
          // to enable access prior to staging.
          return env.getRegion().bulkLoadHFiles(familyPaths, true,
              new SecureBulkLoadListener(fs, bulkToken, conf));
        } catch (Exception e) {
          LOG.error("Failed to complete bulk load", e);
        }
        return false;
      }
    });
  }
  if (region.getCoprocessorHost() != null) {
    try {
      loaded = region.getCoprocessorHost().postBulkLoadHFile(familyPaths, loaded);
    } catch (IOException e) {
      ResponseConverter.setControllerException(controller, e);
      done.run(SecureBulkLoadHFilesResponse.newBuilder().setLoaded(false).build());
      return;
    }
  }
  done.run(SecureBulkLoadHFilesResponse.newBuilder().setLoaded(loaded).build());
}
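// --- Illustrative sketch, not part of the endpoint above: the delegation-token + doAs pattern the
// secure bulk load relies on, reduced to plain Hadoop APIs (UserGroupInformation, FileSystem,
// Credentials, Token). The user name, renewer, and path are placeholders; the endpoint itself uses
// HBase's FsDelegationToken and the region's bulkLoadHFiles instead of the exists() probe shown here.
static boolean runAsRequestingUser(final Configuration conf, final Path probe) throws IOException {
  // Impersonate the requesting user (it carries no credentials of its own yet).
  UserGroupInformation requestUgi = UserGroupInformation.createRemoteUser("request-user");
  // Lend it the server filesystem's delegation tokens, acquired under the server's own login.
  Credentials creds = new Credentials();
  for (Token<?> t : FileSystem.get(conf).addDelegationTokens("renewer", creds)) {
    requestUgi.addToken(t);
  }
  // Filesystem work inside doAs now runs as the requesting user, authenticating with those tokens.
  return requestUgi.doAs(new PrivilegedAction<Boolean>() {
    @Override
    public Boolean run() {
      try {
        return FileSystem.get(conf).exists(probe);
      } catch (IOException e) {
        return false;
      }
    }
  });
}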