
Using EvalAI: a Kaggle-like open-source platform, but without the kernel fork feature, which is a pain


With the official code at https://github.com/Cloud-CV/EvalAI I was never able to successfully import the YAML configuration to host a competition (creating a challenge on EvalAI uses https://github.com/Cloud-CV/EvalAI-Starters).
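For context, in the Starters template a challenge is described by a challenge_config.yaml plus an evaluation script, which get bundled into a zip and imported on the server. Below is a minimal sketch of building that bundle; the folder name and output filename are assumptions based on the EvalAI-Starters layout, not something from this post, so adjust them to your repo.

# Minimal sketch: bundle an EvalAI-Starters-style challenge directory into a
# zip for upload. The directory layout is assumed, not prescribed by EvalAI.
import os
import zipfile

def build_challenge_zip(config_dir="EvalAI-Starters", out_path="challenge_config.zip"):
    """Zip everything under config_dir (challenge_config.yaml, evaluation
    script, annotations, ...) while preserving relative paths."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(config_dir):
            for name in files:
                full = os.path.join(root, name)
                # store paths relative to config_dir so the yaml sits at the zip root
                zf.write(full, os.path.relpath(full, config_dir))
    return out_path

if __name__ == "__main__":
    print("wrote", build_challenge_zip())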

 

It only worked after switching to a third-party fork: https://github.com/live-wire/EvalAI

 

Below is a brief walkthrough of how to use it:

A question we’re often asked is: Doesn’t Kaggle already do this? The central differences are:

  • Custom Evaluation Protocols and Phases: We have designed a versatile backend framework that supports user-defined evaluation metrics, multiple evaluation phases, and private and public leaderboards (see the evaluation-script sketch after this list).
  • Faster Evaluation: The backend evaluation pipeline is engineered so that submissions can be evaluated in parallel, using multiple cores on multiple machines via map-reduce frameworks, offering a significant performance boost over similar web-based AI challenge platforms.
  • Portability: Since the platform is open-source, users have the freedom to host challenges on their own private servers rather than having to depend on cloud services such as AWS, Azure, etc.
  • Easy Hosting: Hosting a challenge is streamlined. One can create a challenge on EvalAI using the intuitive UI (work in progress) or using a zip configuration file.
  • Centralized Leaderboard: Whether challenge organizers host their challenge on EvalAI or on a forked version of EvalAI, they can send the results to the main EvalAI server. This helps build a centralized platform for keeping track of different challenges.
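To make the "custom evaluation protocols" point concrete, here is a rough sketch of the kind of evaluation script a challenge host plugs in. The interface shown (an evaluate() function returning per-split results) loosely follows the EvalAI-Starters template; the exact signature and result keys on your EvalAI version may differ, and the metric here is a placeholder.

# Hedged sketch of a host-defined evaluation script, loosely following the
# EvalAI-Starters template. Function name, arguments and result keys are
# assumptions about that template.
import json

def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
    with open(test_annotation_file) as f:
        gt = json.load(f)      # ground-truth answers, e.g. {"id": label, ...}
    with open(user_submission_file) as f:
        pred = json.load(f)    # participant predictions in the same format

    correct = sum(1 for k, v in gt.items() if pred.get(k) == v)
    accuracy = float(correct) / max(len(gt), 1)

    # EvalAI expects a dict whose "result" lists per-split metric dicts;
    # the split and metric names below are placeholders.
    return {
        "result": [
            {"test_split": {"Accuracy": accuracy, "Total": len(gt)}}
        ]
    }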

Goal

Our ultimate goal is to build a centralized platform to host, participate in, and collaborate on AI challenges organized around the globe, and we hope to help benchmark progress in AI.

Performance comparison

Some background: last year, the Visual Question Answering (VQA) Challenge 2016 was hosted on another platform, where evaluation took ~10 minutes on average. EvalAI hosted this year's VQA Challenge 2017, whose dataset is twice as large. Despite this, we've found that our parallelized backend takes only ~130 seconds to evaluate on the whole VQA 2.0 test set.

Installation Instructions

Setting up EvalAI on your local machine is really easy. You can set up EvalAI using either of two methods:

Using Docker

You can also use Docker Compose to run all the components of EvalAI together. The steps are:

  1. Get the source code onto your machine via git.
git clone https://github.com/Cloud-CV/EvalAI.git evalai && cd evalai

Use your postgres username and password for the USER and PASSWORD fields in the settings/dev.py file (see the sketch just after these steps).

  2. Build and run the Docker containers. This might take a while. You should be able to access EvalAI at localhost:8888.
docker-compose up --build
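The USER and PASSWORD fields mentioned above live in a standard Django DATABASES setting. Roughly, the relevant block in settings/dev.py looks like the following; the key names are Django's, but the exact defaults shipped with EvalAI may differ.

# Rough sketch of the database block in settings/dev.py.
# The structure is Django's standard DATABASES setting; EvalAI's actual
# defaults may differ slightly.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "evalai",                       # the database created with `createdb evalai`
        "USER": "your_postgres_user",           # <- change this
        "PASSWORD": "your_postgres_password",   # <- and this
        "HOST": "localhost",
        "PORT": "5432",
    }
}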

Using Virtual Environment

  1. Install python 2.7.10 or above, git, postgresql version >= 10.1, ElasticMQ (Amazon SQS is used in production), and virtualenv on your computer if you don't have them already. If you are having trouble with postgresql on Windows, check this link: postgresqlhelp.
  2. Get the source code on your machine via git.
git clone https://github.com/Cloud-CV/EvalAI.git evalai
  3. Create a python virtual environment and install python dependencies.
cd evalai
virtualenv venv
source venv/bin/activate  # run this command every time before working on the project
pip install -r requirements/dev.txt
  4. Create an empty postgres database.
sudo -i -u (username)
createdb evalai
  5. Change the PostgreSQL credentials in settings/dev.py and run migrations.
    Use your postgres username and password for the USER and PASSWORD fields in dev.py (as sketched earlier). After changing the credentials, run migrations using the following command:
python manage.py migrate --settings=settings.dev
  6. Seed the database with some fake data to work with.
python manage.py seed --settings=settings.dev

This command also creates a superuser (admin), a host user, and a participant user with the following credentials.

SUPERUSER - username: admin, password: password
HOST USER - username: host, password: password
PARTICIPANT USER - username: participant, password: password

  7. That's it. Now you can run the development server at http://127.0.0.1:8000 (for serving the backend):
python manage.py runserver --settings=settings.dev
  8. Please make sure that node (>=7.x.x), npm (>=5.x.x), and bower (>=1.8.x) are installed globally on your machine.
    Install the npm and bower dependencies by running
npm install
bower install

If you are running npm install behind a proxy server, use

npm config set proxy http://proxy:port
  9. Now connect to the dev server at http://127.0.0.1:8888 (for serving the frontend):
gulp dev:runserver
  10. That's it. Open a web browser and hit the URL http://127.0.0.1:8888.
  11. (Optional) If you want to see the whole pipeline in action, install the ElasticMQ queue service and start the worker in a new terminal window using the following command; it consumes the submissions made for every challenge:
python scripts/workers/submission_worker.py

 

Note: so that newly created accounts can log in and join a team directly without verifying their email, I modified accounts/permissions.py:
vi accounts/permissions.py

from allauth.account.models import EmailAddress
from rest_framework import permissions


class HasVerifiedEmail(permissions.BasePermission):
    """
    Permission class that checks whether the user has verified their email.
    Patched locally so that the check always passes.
    """

    message = "Please verify your email first!"

    def has_permission(self, request, view):

        if request.user.is_anonymous:
            return True
        else:
            # Local patch: skip email verification entirely so newly created
            # accounts can log in and join a team right away.
            print("*******************email verify removed!!!!")
            return True
            # Original check (now unreachable):
            if EmailAddress.objects.filter(user=request.user, verified=True).exists():
                return True
            else:
                return False
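A quick way to sanity-check the patch is from the Django shell (python manage.py shell --settings=settings.dev). The snippet below assumes the database was seeded as described above, so a "participant" user exists; it is only an illustrative check, not part of EvalAI itself.

# Run inside `python manage.py shell --settings=settings.dev`.
from django.contrib.auth import get_user_model
from django.test import RequestFactory

from accounts.permissions import HasVerifiedEmail

user = get_user_model().objects.get(username="participant")  # seeded user
request = RequestFactory().get("/")
request.user = user  # attach the user the way DRF would after authentication

# With the patch, this prints the marker line and returns True even if the
# user's email address was never verified.
print(HasVerifiedEmail().has_permission(request, view=None))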

 

Run it with Docker:
docker-compose up --build
Then comes a long wait while it installs all kinds of dependencies, Linux Docker bits, and so on...

Finally, just visit localhost:8888.

 

From: https://blog.51cto.com/u_11908275/6405413
