Fixing the org.apache.hadoop.io.nativeio.NativeIO$Windows.access0 error


Running the code fails with the following error:

java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

On Windows this usually means Hadoop's native library (hadoop.dll) could not be loaded, so the JVM cannot resolve the native access0 method.

The workaround is to create a new package in your Java project and place a rewritten NativeIO.java in it.

The package name must be exactly org.apache.hadoop.io.nativeio, so that your copy of the class shadows the one shipped inside the Hadoop jar.
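This shadowing works because, under the default application class loader, a class compiled in your own project is found on the class path before the identical class inside a dependency jar. A quick generic way to confirm which copy the JVM actually loaded (a standalone sketch, not part of the fix itself; the class name here is made up for illustration) is to print the class's code source:

```java
// Prints where a class was loaded from. In a real project you would pass
// NativeIO.class; the printed location should be your build output
// directory, not the Hadoop jar, if the override is in effect.
public class WhichClassLoaded {
    static String locationOf(Class<?> c) {
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        // Bootstrap classes (e.g. java.lang.String) report no code source.
        return src == null ? "bootstrap class path" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        System.out.println(locationOf(WhichClassLoaded.class));
    }
}
```

If the printed path points at the Hadoop jar rather than your project's output directory, the override package is not on the class path ahead of the jar.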


After making this change, re-run the project.

The full NativeIO.java is as follows:

//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by FernFlower decompiler)
//

package org.apache.hadoop.io.nativeio;

import com.google.common.annotations.VisibleForTesting;
import java.io.Closeable;
import java.io.File;
import java.io.FileDescriptor;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.classification.InterfaceAudience.Private;
import org.apache.hadoop.classification.InterfaceStability.Unstable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.HardLink;
import org.apache.hadoop.fs.PathIOException;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SecureIOUtils.AlreadyExistsException;
import org.apache.hadoop.util.NativeCodeLoader;
import org.apache.hadoop.util.PerformanceAdvisory;
import org.apache.hadoop.util.Shell;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import sun.misc.Cleaner;
import sun.misc.Unsafe;
import sun.nio.ch.DirectBuffer;

@Private
@Unstable
public class NativeIO {
    private static boolean workaroundNonThreadSafePasswdCalls = false;
    private static final Logger LOG = LoggerFactory.getLogger(NativeIO.class);
    private static boolean nativeLoaded = false;
    private static final Map<Long, NativeIO.CachedUid> uidCache;
    private static long cacheTimeout;
    private static boolean initialized;

    public NativeIO() {
    }

    public static boolean isAvailable() {
        return NativeCodeLoader.isNativeCodeLoaded() && nativeLoaded;
    }

    private static native void initNative();

    static long getMemlockLimit() {
        return isAvailable() ? getMemlockLimit0() : 0L;
    }

    private static native long getMemlockLimit0();

    static long getOperatingSystemPageSize() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Unsafe unsafe = (Unsafe)f.get((Object)null);
            return (long)unsafe.pageSize();
        } catch (Throwable var2) {
            LOG.warn("Unable to get operating system page size.  Guessing 4096.", var2);
            return 4096L;
        }
    }

    private static String stripDomain(String name) {
        int i = name.indexOf(92);
        if (i != -1) {
            name = name.substring(i + 1);
        }

        return name;
    }

    public static String getOwner(FileDescriptor fd) throws IOException {
        ensureInitialized();
        if (Shell.WINDOWS) {
            String owner = NativeIO.Windows.getOwner(fd);
            owner = stripDomain(owner);
            return owner;
        } else {
            long uid = NativeIO.POSIX.getUIDforFDOwnerforOwner(fd);
            NativeIO.CachedUid cUid = (NativeIO.CachedUid)uidCache.get(uid);
            long now = System.currentTimeMillis();
            if (cUid != null && cUid.timestamp + cacheTimeout > now) {
                return cUid.username;
            } else {
                String user = NativeIO.POSIX.getUserName(uid);
                LOG.info("Got UserName " + user + " for UID " + uid + " from the native implementation");
                cUid = new NativeIO.CachedUid(user, now);
                uidCache.put(uid, cUid);
                return user;
            }
        }
    }

    public static FileDescriptor getShareDeleteFileDescriptor(File f, long seekOffset) throws IOException {
        if (!Shell.WINDOWS) {
            RandomAccessFile rf = new RandomAccessFile(f, "r");
            if (seekOffset > 0L) {
                rf.seek(seekOffset);
            }

            return rf.getFD();
        } else {
            FileDescriptor fd = NativeIO.Windows.createFile(f.getAbsolutePath(), 2147483648L, 7L, 3L);
            if (seekOffset > 0L) {
                NativeIO.Windows.setFilePointer(fd, seekOffset, 0L);
            }

            return fd;
        }
    }

    public static FileOutputStream getCreateForWriteFileOutputStream(File f, int permissions) throws IOException {
        FileDescriptor fd;
        if (!Shell.WINDOWS) {
            try {
                fd = NativeIO.POSIX.open(f.getAbsolutePath(), NativeIO.POSIX.O_WRONLY | NativeIO.POSIX.O_CREAT | NativeIO.POSIX.O_EXCL, permissions);
                return new FileOutputStream(fd);
            } catch (NativeIOException var4) {
                if (var4.getErrno() == Errno.EEXIST) {
                    throw new AlreadyExistsException(var4);
                } else {
                    throw var4;
                }
            }
        } else {
            try {
                fd = NativeIO.Windows.createFile(f.getCanonicalPath(), 1073741824L, 7L, 1L);
                NativeIO.POSIX.chmod(f.getCanonicalPath(), permissions);
                return new FileOutputStream(fd);
            } catch (NativeIOException var3) {
                if (var3.getErrorCode() == 80L) {
                    throw new AlreadyExistsException(var3);
                } else {
                    throw var3;
                }
            }
        }
    }

    private static synchronized void ensureInitialized() {
        if (!initialized) {
            cacheTimeout = (new Configuration()).getLong("hadoop.security.uid.cache.secs", 14400L) * 1000L;
            LOG.info("Initialized cache for UID to User mapping with a cache timeout of " + cacheTimeout / 1000L + " seconds.");
            initialized = true;
        }

    }

    public static void renameTo(File src, File dst) throws IOException {
        if (!nativeLoaded) {
            if (!src.renameTo(dst)) {
                throw new IOException("renameTo(src=" + src + ", dst=" + dst + ") failed.");
            }
        } else {
            renameTo0(src.getAbsolutePath(), dst.getAbsolutePath());
        }

    }

    /** @deprecated */
    @Deprecated
    public static void link(File src, File dst) throws IOException {
        if (!nativeLoaded) {
            HardLink.createHardLink(src, dst);
        } else {
            link0(src.getAbsolutePath(), dst.getAbsolutePath());
        }

    }

    private static native void renameTo0(String var0, String var1) throws NativeIOException;

    private static native void link0(String var0, String var1) throws NativeIOException;

    public static void copyFileUnbuffered(File src, File dst) throws IOException {
        if (nativeLoaded && Shell.WINDOWS) {
            copyFileUnbuffered0(src.getAbsolutePath(), dst.getAbsolutePath());
        } else {
            FileInputStream fis = new FileInputStream(src);
            FileChannel input = null;

            try {
                input = fis.getChannel();
                FileOutputStream fos = new FileOutputStream(dst);
                Throwable var5 = null;

                try {
                    FileChannel output = fos.getChannel();
                    Throwable var7 = null;

                    try {
                        long remaining = input.size();
                        long position = 0L;

                        for(long transferred = 0L; remaining > 0L; position += transferred) {
                            transferred = input.transferTo(position, remaining, output);
                            remaining -= transferred;
                        }
                    } catch (Throwable var47) {
                        var7 = var47;
                        throw var47;
                    } finally {
                        if (output != null) {
                            if (var7 != null) {
                                try {
                                    output.close();
                                } catch (Throwable var46) {
                                    var7.addSuppressed(var46);
                                }
                            } else {
                                output.close();
                            }
                        }

                    }
                } catch (Throwable var49) {
                    var5 = var49;
                    throw var49;
                } finally {
                    if (fos != null) {
                        if (var5 != null) {
                            try {
                                fos.close();
                            } catch (Throwable var45) {
                                var5.addSuppressed(var45);
                            }
                        } else {
                            fos.close();
                        }
                    }

                }
            } finally {
                IOUtils.cleanupWithLogger(LOG, new Closeable[]{input, fis});
            }
        }

    }

    private static native void copyFileUnbuffered0(String var0, String var1) throws NativeIOException;

    static {
        if (NativeCodeLoader.isNativeCodeLoaded()) {
            try {
                initNative();
                nativeLoaded = true;
            } catch (Throwable var1) {
                PerformanceAdvisory.LOG.debug("Unable to initialize NativeIO libraries", var1);
            }
        }

        uidCache = new ConcurrentHashMap<>();
        initialized = false;
    }

    private static class CachedUid {
        final long timestamp;
        final String username;

        public CachedUid(String username, long timestamp) {
            this.timestamp = timestamp;
            this.username = username;
        }
    }

    public static class Windows {
        public static final long GENERIC_READ = 2147483648L;
        public static final long GENERIC_WRITE = 1073741824L;
        public static final long FILE_SHARE_READ = 1L;
        public static final long FILE_SHARE_WRITE = 2L;
        public static final long FILE_SHARE_DELETE = 4L;
        public static final long CREATE_NEW = 1L;
        public static final long CREATE_ALWAYS = 2L;
        public static final long OPEN_EXISTING = 3L;
        public static final long OPEN_ALWAYS = 4L;
        public static final long TRUNCATE_EXISTING = 5L;
        public static final long FILE_BEGIN = 0L;
        public static final long FILE_CURRENT = 1L;
        public static final long FILE_END = 2L;
        public static final long FILE_ATTRIBUTE_NORMAL = 128L;

        public Windows() {
        }

        public static void createDirectoryWithMode(File path, int mode) throws IOException {
            createDirectoryWithMode0(path.getAbsolutePath(), mode);
        }

        private static native void createDirectoryWithMode0(String var0, int var1) throws NativeIOException;

        public static native FileDescriptor createFile(String var0, long var1, long var3, long var5) throws IOException;

        public static FileOutputStream createFileOutputStreamWithMode(File path, boolean append, int mode) throws IOException {
            long desiredAccess = 1073741824L;
            long shareMode = 3L;
            long creationDisposition = append ? 4L : 2L;
            return new FileOutputStream(createFileWithMode0(path.getAbsolutePath(), desiredAccess, shareMode, creationDisposition, mode));
        }

        private static native FileDescriptor createFileWithMode0(String var0, long var1, long var3, long var5, int var7) throws NativeIOException;

        public static native long setFilePointer(FileDescriptor var0, long var1, long var3) throws IOException;

        private static native String getOwner(FileDescriptor var0) throws IOException;

        private static native boolean access0(String var0, int var1);

        public static boolean access(String path, NativeIO.Windows.AccessRight desiredAccess) throws IOException {
            // Modified: the native access0 call below is what throws
            // UnsatisfiedLinkError when hadoop.dll is unavailable on Windows,
            // so it is bypassed and access is always granted.
            // return access0(path, desiredAccess.accessRight());
            return true;
        }

        public static native void extendWorkingSetSize(long var0) throws IOException;

        static {
            if (NativeCodeLoader.isNativeCodeLoaded()) {
                try {
                    NativeIO.initNative();
                    NativeIO.nativeLoaded = true;
                } catch (Throwable var1) {
                    PerformanceAdvisory.LOG.debug("Unable to initialize NativeIO libraries", var1);
                }
            }

        }

        public static enum AccessRight {
            ACCESS_READ(1),
            ACCESS_WRITE(2),
            ACCESS_EXECUTE(32);

            private final int accessRight;

            private AccessRight(int access) {
                this.accessRight = access;
            }

            public int accessRight() {
                return this.accessRight;
            }
        }
    }

    public static class POSIX {
        public static int O_RDONLY = -1;
        public static int O_WRONLY = -1;
        public static int O_RDWR = -1;
        public static int O_CREAT = -1;
        public static int O_EXCL = -1;
        public static int O_NOCTTY = -1;
        public static int O_TRUNC = -1;
        public static int O_APPEND = -1;
        public static int O_NONBLOCK = -1;
        public static int O_SYNC = -1;
        public static int POSIX_FADV_NORMAL = -1;
        public static int POSIX_FADV_RANDOM = -1;
        public static int POSIX_FADV_SEQUENTIAL = -1;
        public static int POSIX_FADV_WILLNEED = -1;
        public static int POSIX_FADV_DONTNEED = -1;
        public static int POSIX_FADV_NOREUSE = -1;
        public static int SYNC_FILE_RANGE_WAIT_BEFORE = 1;
        public static int SYNC_FILE_RANGE_WRITE = 2;
        public static int SYNC_FILE_RANGE_WAIT_AFTER = 4;
        private static final Logger LOG = LoggerFactory.getLogger(NativeIO.class);
        public static boolean fadvisePossible = false;
        private static boolean nativeLoaded = false;
        private static boolean syncFileRangePossible = true;
        static final String WORKAROUND_NON_THREADSAFE_CALLS_KEY = "hadoop.workaround.non.threadsafe.getpwuid";
        static final boolean WORKAROUND_NON_THREADSAFE_CALLS_DEFAULT = true;
        private static long cacheTimeout = -1L;
        private static NativeIO.POSIX.CacheManipulator cacheManipulator = new NativeIO.POSIX.CacheManipulator();
        private static final Map<Integer, NativeIO.POSIX.CachedName> USER_ID_NAME_CACHE;
        private static final Map<Integer, NativeIO.POSIX.CachedName> GROUP_ID_NAME_CACHE;
        public static final int MMAP_PROT_READ = 1;
        public static final int MMAP_PROT_WRITE = 2;
        public static final int MMAP_PROT_EXEC = 4;

        public POSIX() {
        }

        public static NativeIO.POSIX.CacheManipulator getCacheManipulator() {
            return cacheManipulator;
        }

        public static void setCacheManipulator(NativeIO.POSIX.CacheManipulator cacheManipulator) {
            NativeIO.POSIX.cacheManipulator = cacheManipulator;
        }

        public static boolean isAvailable() {
            return NativeCodeLoader.isNativeCodeLoaded() && nativeLoaded;
        }

        private static void assertCodeLoaded() throws IOException {
            if (!isAvailable()) {
                throw new IOException("NativeIO was not loaded");
            }
        }

        public static native FileDescriptor open(String var0, int var1, int var2) throws IOException;

        private static native NativeIO.POSIX.Stat fstat(FileDescriptor var0) throws IOException;

        private static native NativeIO.POSIX.Stat stat(String var0) throws IOException;

        private static native void chmodImpl(String var0, int var1) throws IOException;

        public static void chmod(String path, int mode) throws IOException {
            if (!Shell.WINDOWS) {
                chmodImpl(path, mode);
            } else {
                try {
                    chmodImpl(path, mode);
                } catch (NativeIOException var3) {
                    if (var3.getErrorCode() == 3L) {
                        throw new NativeIOException("No such file or directory", Errno.ENOENT);
                    }

                    LOG.warn(String.format("NativeIO.chmod error (%d): %s", var3.getErrorCode(), var3.getMessage()));
                    throw new NativeIOException("Unknown error", Errno.UNKNOWN);
                }
            }

        }

        static native void posix_fadvise(FileDescriptor var0, long var1, long var3, int var5) throws NativeIOException;

        static native void sync_file_range(FileDescriptor var0, long var1, long var3, int var5) throws NativeIOException;

        static void posixFadviseIfPossible(String identifier, FileDescriptor fd, long offset, long len, int flags) throws NativeIOException {
            if (nativeLoaded && fadvisePossible) {
                try {
                    posix_fadvise(fd, offset, len, flags);
                } catch (UnsatisfiedLinkError var8) {
                    fadvisePossible = false;
                }
            }

        }

        public static void syncFileRangeIfPossible(FileDescriptor fd, long offset, long nbytes, int flags) throws NativeIOException {
            if (nativeLoaded && syncFileRangePossible) {
                try {
                    sync_file_range(fd, offset, nbytes, flags);
                } catch (UnsupportedOperationException var7) {
                    syncFileRangePossible = false;
                } catch (UnsatisfiedLinkError var8) {
                    syncFileRangePossible = false;
                }
            }

        }

        static native void mlock_native(ByteBuffer var0, long var1) throws NativeIOException;

        static void mlock(ByteBuffer buffer, long len) throws IOException {
            assertCodeLoaded();
            if (!buffer.isDirect()) {
                throw new IOException("Cannot mlock a non-direct ByteBuffer");
            } else {
                mlock_native(buffer, len);
            }
        }

        public static void munmap(MappedByteBuffer buffer) {
            if (buffer instanceof DirectBuffer) {
                Cleaner cleaner = ((DirectBuffer)buffer).cleaner();
                cleaner.clean();
            }

        }

        private static native long getUIDforFDOwnerforOwner(FileDescriptor var0) throws IOException;

        private static native String getUserName(long var0) throws IOException;

        public static NativeIO.POSIX.Stat getFstat(FileDescriptor fd) throws IOException {
            NativeIO.POSIX.Stat stat = null;
            if (!Shell.WINDOWS) {
                stat = fstat(fd);
                stat.owner = getName(NativeIO.POSIX.IdCache.USER, stat.ownerId);
                stat.group = getName(NativeIO.POSIX.IdCache.GROUP, stat.groupId);
            } else {
                try {
                    stat = fstat(fd);
                } catch (NativeIOException var3) {
                    if (var3.getErrorCode() == 6L) {
                        throw new NativeIOException("The handle is invalid.", Errno.EBADF);
                    }

                    LOG.warn(String.format("NativeIO.getFstat error (%d): %s", var3.getErrorCode(), var3.getMessage()));
                    throw new NativeIOException("Unknown error", Errno.UNKNOWN);
                }
            }

            return stat;
        }

        public static NativeIO.POSIX.Stat getStat(String path) throws IOException {
            if (path == null) {
                String errMessage = "Path is null";
                LOG.warn(errMessage);
                throw new IOException(errMessage);
            } else {
                NativeIO.POSIX.Stat stat = null;

                try {
                    if (!Shell.WINDOWS) {
                        stat = stat(path);
                        stat.owner = getName(NativeIO.POSIX.IdCache.USER, stat.ownerId);
                        stat.group = getName(NativeIO.POSIX.IdCache.GROUP, stat.groupId);
                    } else {
                        stat = stat(path);
                    }

                    return stat;
                } catch (NativeIOException var3) {
                    LOG.warn("NativeIO.getStat error ({}): {} -- file path: {}", new Object[]{var3.getErrorCode(), var3.getMessage(), path});
                    throw new PathIOException(path, var3);
                }
            }
        }

        private static String getName(NativeIO.POSIX.IdCache domain, int id) throws IOException {
            Map<Integer, NativeIO.POSIX.CachedName> idNameCache = domain == NativeIO.POSIX.IdCache.USER ? USER_ID_NAME_CACHE : GROUP_ID_NAME_CACHE;
            NativeIO.POSIX.CachedName cachedName = (NativeIO.POSIX.CachedName)idNameCache.get(id);
            long now = System.currentTimeMillis();
            String name;
            if (cachedName != null && cachedName.timestamp + cacheTimeout > now) {
                name = cachedName.name;
            } else {
                name = domain == NativeIO.POSIX.IdCache.USER ? getUserName(id) : getGroupName(id);
                if (LOG.isDebugEnabled()) {
                    String type = domain == NativeIO.POSIX.IdCache.USER ? "UserName" : "GroupName";
                    LOG.debug("Got " + type + " " + name + " for ID " + id + " from the native implementation");
                }

                cachedName = new NativeIO.POSIX.CachedName(name, now);
                idNameCache.put(id, cachedName);
            }

            return name;
        }

        static native String getUserName(int var0) throws IOException;

        static native String getGroupName(int var0) throws IOException;

        public static native long mmap(FileDescriptor var0, int var1, boolean var2, long var3) throws IOException;

        public static native void munmap(long var0, long var2) throws IOException;

        static {
            if (NativeCodeLoader.isNativeCodeLoaded()) {
                try {
                    Configuration conf = new Configuration();
                    NativeIO.workaroundNonThreadSafePasswdCalls = conf.getBoolean("hadoop.workaround.non.threadsafe.getpwuid", true);
                    NativeIO.initNative();
                    nativeLoaded = true;
                    cacheTimeout = conf.getLong("hadoop.security.uid.cache.secs", 14400L) * 1000L;
                    LOG.debug("Initialized cache for IDs to User/Group mapping with a  cache timeout of " + cacheTimeout / 1000L + " seconds.");
                } catch (Throwable var1) {
                    PerformanceAdvisory.LOG.debug("Unable to initialize NativeIO libraries", var1);
                }
            }

            USER_ID_NAME_CACHE = new ConcurrentHashMap<>();
            GROUP_ID_NAME_CACHE = new ConcurrentHashMap<>();
        }

        private static enum IdCache {
            USER,
            GROUP;

            private IdCache() {
            }
        }

        private static class CachedName {
            final long timestamp;
            final String name;

            public CachedName(String name, long timestamp) {
                this.name = name;
                this.timestamp = timestamp;
            }
        }

        public static class Stat {
            private int ownerId;
            private int groupId;
            private String owner;
            private String group;
            private int mode;
            public static int S_IFMT = -1;
            public static int S_IFIFO = -1;
            public static int S_IFCHR = -1;
            public static int S_IFDIR = -1;
            public static int S_IFBLK = -1;
            public static int S_IFREG = -1;
            public static int S_IFLNK = -1;
            public static int S_IFSOCK = -1;
            public static int S_ISUID = -1;
            public static int S_ISGID = -1;
            public static int S_ISVTX = -1;
            public static int S_IRUSR = -1;
            public static int S_IWUSR = -1;
            public static int S_IXUSR = -1;

            Stat(int ownerId, int groupId, int mode) {
                this.ownerId = ownerId;
                this.groupId = groupId;
                this.mode = mode;
            }

            Stat(String owner, String group, int mode) {
                if (!Shell.WINDOWS) {
                    this.owner = owner;
                } else {
                    this.owner = NativeIO.stripDomain(owner);
                }

                if (!Shell.WINDOWS) {
                    this.group = group;
                } else {
                    this.group = NativeIO.stripDomain(group);
                }

                this.mode = mode;
            }

            public String toString() {
                return "Stat(owner='" + this.owner + "', group='" + this.group + "', mode=" + this.mode + ")";
            }

            public String getOwner() {
                return this.owner;
            }

            public String getGroup() {
                return this.group;
            }

            public int getMode() {
                return this.mode;
            }
        }

        @VisibleForTesting
        public static class NoMlockCacheManipulator extends NativeIO.POSIX.CacheManipulator {
            public NoMlockCacheManipulator() {
            }

            public void mlock(String identifier, ByteBuffer buffer, long len) throws IOException {
                NativeIO.POSIX.LOG.info("mlocking " + identifier);
            }

            public long getMemlockLimit() {
                return 1125899906842624L;
            }

            public long getOperatingSystemPageSize() {
                return 4096L;
            }

            public boolean verifyCanMlock() {
                return true;
            }
        }

        @VisibleForTesting
        public static class CacheManipulator {
            public CacheManipulator() {
            }

            public void mlock(String identifier, ByteBuffer buffer, long len) throws IOException {
                NativeIO.POSIX.mlock(buffer, len);
            }

            public long getMemlockLimit() {
                return NativeIO.getMemlockLimit();
            }

            public long getOperatingSystemPageSize() {
                return NativeIO.getOperatingSystemPageSize();
            }

            public void posixFadviseIfPossible(String identifier, FileDescriptor fd, long offset, long len, int flags) throws NativeIOException {
                NativeIO.POSIX.posixFadviseIfPossible(identifier, fd, offset, len, flags);
            }

            public boolean verifyCanMlock() {
                return NativeIO.isAvailable();
            }
        }
    }
}
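For the override to shadow the jar's class, the file must sit at the matching package path in your source tree. In a standard Maven/Gradle layout (directory names here assume those conventions) that is:

```
src
└── main
    └── java
        └── org
            └── apache
                └── hadoop
                    └── io
                        └── nativeio
                            └── NativeIO.java
```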
