I quit my FAANG job / Future of Agentic AI
·
Others/TIL & Insight
https://jagilley.github.io/faang-blog.html
I quit my FAANG job because it'll be automated by the end of 2025
Taking a medium-term look at the market dynamics surrounding my employment prompted me to quit a few weeks ago. I'm now convinced that my former job will b..
Fixing the errors: Command 'pip' not found & Permission denied
·
Others/Troubleshooting
While trying to install the requirements of a cloned repository in a conda virtual environment, the error below occurred: Command 'pip' not found. The install fails because the pip module is not present, and the problem is that the server is not under my account's permissions, so apt install python3-pip does not work (nor does sudo). In that case, install python into the conda environment itself. Unsurprisingly, this often happens when no python version was specified while creating the virtual environment. conda install python — after installation, python and pip both resolve correctly.
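As a quick sanity check (my addition, not from the original post), you can ask Python itself which interpreter and pip the active environment resolves:

```python
# Quick diagnostic for "Command 'pip' not found": check which python the
# active environment points to and whether a pip executable is on its PATH.
import shutil
import sys

print(sys.executable)        # should point inside the conda env
print(shutil.which("pip"))   # None means pip is missing from the env
```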
Paper Review: ImageNet Classification with Deep Convolutional Neural Networks
·
AI/Paper Review
https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf
Today's paper is the deep learning architecture paper famous as AlexNet. It took first place in the 2012 ImageNet classification competition. Introduction: Recent advances in machine learning methods have made it possible to collect larger training datasets, train more powerful models, and use improved techniques to prevent overfitting. The paper notes that labeled image datasets had previously been limited to tens of thousands of images, which sufficed for simple recognition tasks, but that complex object recognition requires far larger datasets. To recognize more objects..
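For reference, the architecture the paper introduces is still available off the shelf. A minimal sketch using torchvision's reference implementation, which follows the paper's 5-conv / 3-FC layout (with minor differences from the original two-GPU design):

```python
# Instantiate an untrained AlexNet-style model and run a single forward pass.
# torchvision's variant differs slightly from the paper's two-GPU original.
import torch
from torchvision.models import alexnet

model = alexnet(weights=None)    # 1000 ImageNet classes, untrained
x = torch.randn(1, 3, 224, 224)  # one RGB image at the paper's input size
logits = model(x)
print(logits.shape)                                # torch.Size([1, 1000])
print(sum(p.numel() for p in model.parameters()))  # ~61M parameters
```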
Demystifying Reasoning Models
·
Others/TIL & Insight
This article explores the emerging paradigm of reasoning models, which differ from traditional large language models (LLMs) in their problem-solving approach. Unlike conventional LLMs that generate responses in a fixed manner, reasoning models allocate variable computational effort to thinking before answering, improving their ability to decompose problems, detect errors, and explore alternative..
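One concrete form of this "variable computational effort" is sampling several independent reasoning chains and majority-voting the final answers (self-consistency). The sketch below is my toy illustration, not from the article; sample_answer is a hypothetical stand-in for a stochastic LLM call:

```python
# Toy illustration of scaling test-time compute: sample several reasoning
# chains and majority-vote their final answers (self-consistency).
# `sample_answer` is a hypothetical stand-in for a stochastic LLM call.
import random
from collections import Counter

def sample_answer(question: str) -> str:
    # Pretend the model usually, but not always, reasons to the right answer.
    return random.choice(["42", "42", "42", "41"])

def self_consistency(question: str, n_samples: int = 16) -> str:
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]  # answer with the most votes

print(self_consistency("What is 6 * 7?"))
```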
Paper Review: LLM-Pruner: On the Structural Pruning of Large Language Models
·
AI/Paper Review
Today's paper is LLM-Pruner, published at NeurIPS 2023.
https://openreview.net/forum?id=J8Ajf9WfXP
LLM-Pruner: On the Structural Pruning of Large Language Models
Large language models (LLMs) have shown remarkable capabilities in language understanding and generation. However, such impressive capability typically comes with a substantial model size, which...
Large models, exemplified by multimodal models and LLMs..
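To make the "structural" part concrete: structural pruning removes whole rows, columns, or heads as a unit rather than individual weights. The sketch below uses PyTorch's built-in utility on a single linear layer; it is only a generic illustration of structured pruning, not LLM-Pruner's dependency-aware algorithm:

```python
# Generic structured pruning: zero out whole output rows of a linear layer
# by their L2 norm, so entire units are removed as a group.
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(512, 512)
prune.ln_structured(layer, name="weight", amount=0.25, n=2, dim=0)

# A quarter of the 512 output rows are now zeroed out as a unit.
zero_rows = (layer.weight.abs().sum(dim=1) == 0).sum().item()
print(zero_rows)  # 128
```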
Paper Review: FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations
·
AI/Paper Review
https://arxiv.org/abs/2409.05976
FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations
The rapid development of Large Language Models (LLMs) has been pivotal in advancing AI, with pre-trained LLMs being adaptable to diverse downstream tasks through fine-tuning. Federated learning (FL) further enhances fine-tuning in a privacy-aware manner by..
arXiv 2024..
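The core obstacle the title points at: clients fine-tuning with different LoRA ranks produce A/B matrices of different shapes, so they cannot be averaged element-wise. A minimal NumPy sketch of the setting, naively aggregating the full-rank products instead (an illustration of the problem, not necessarily FLoRA's exact aggregation scheme):

```python
# Clients fine-tune with different LoRA ranks r_k, so their factors cannot
# be averaged directly. A naive server-side workaround is to aggregate the
# full-rank products dW_k = B_k @ A_k. Illustration only, not FLoRA itself.
import numpy as np

d, ranks = 64, [4, 8, 16]          # model dim, per-client LoRA ranks
rng = np.random.default_rng(0)

updates = []
for r in ranks:
    B = rng.normal(size=(d, r))    # one client's low-rank factors
    A = rng.normal(size=(r, d))
    updates.append(B @ A)          # full-rank update, shape (d, d)

delta_W = np.mean(updates, axis=0) # shapes now agree across clients
print(delta_W.shape)               # (64, 64)
```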
Federated Learning with GAN-based Data Synthesis for Non-IID Clients
·
AI/Paper Review
https://arxiv.org/abs/2206.05507
Federated Learning with GAN-based Data Synthesis for Non-IID Clients
Federated learning (FL) has recently emerged as a popular privacy-preserving collaborative learning paradigm. However, it suffers from the non-independent and identically distributed (non-IID) data among clients. In this paper, we propose a novel framework..
Accepted at IJCAI 2022. In the federated learning setting, ..
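For context on the non-IID setting, a common way FL papers simulate label skew is a Dirichlet partition of classes across clients (smaller alpha means more skew). This is a standard experimental setup, not the paper's GAN-based method:

```python
# Standard non-IID simulation: split each class's samples across clients
# using Dirichlet proportions; small alpha produces heavy label skew.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_clients, alpha = 10, 4, 0.1

proportions = rng.dirichlet([alpha] * n_clients, size=n_classes)
print(np.round(proportions, 2))  # rows: classes, columns: client shares
```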
Paper Review: Effective Heterogeneous Federated Learning via Efficient Hypernetwork-based Weight Generation
·
AI/Paper Review
Effective Heterogeneous Federated Learning via Efficient Hypernetwork-based Weight Generation
While federated learning leverages distributed client resources, it faces challenges due to heterogeneous client capabilities. This necessitates allocating models suited to clients' resources and careful parameter aggregation to accommodate this heterogene..
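The basic mechanism, in generic form: a hypernetwork maps a per-client embedding to the weights of a client-side layer, so clients with different capacities can be served differently sized models. A minimal PyTorch sketch of this idea (my illustration, not the paper's architecture):

```python
# Minimal hypernetwork: a small generator maps a learned client embedding
# to the weight matrix and bias of a client-side linear layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNet(nn.Module):
    def __init__(self, embed_dim: int, in_dim: int, out_dim: int):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.gen = nn.Linear(embed_dim, in_dim * out_dim + out_dim)

    def forward(self, client_embedding: torch.Tensor, x: torch.Tensor):
        params = self.gen(client_embedding)  # flat weights + bias
        W = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim :]
        return F.linear(x, W, b)             # apply the generated layer

hyper = HyperNet(embed_dim=16, in_dim=32, out_dim=8)
z = torch.randn(16)       # one client's embedding
x = torch.randn(4, 32)    # a batch of that client's data
print(hyper(z, x).shape)  # torch.Size([4, 8])
```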
jaehee831
Neural Notes