Thank you for your interest in contributing to Social Street Smart! This document provides detailed information on how to set up the project, run various components, and contribute effectively.
The repository is organized as follows:

```
Social-Street-Smart/
├── client/              # Frontend (Chrome extension)
├── server/              # Backend services
│   ├── clickbait/
│   ├── hate-speech/
│   ├── fakenews/
│   ├── imageAPI/
│   └── news-origin/
├── ML/                  # Machine learning models
│   ├── clickbait/
│   ├── hate-speech/
│   └── fakenews/
└── docker-compose.yml
```
To work on the project you will need:

- Node.js (v14+)
- Python (v3.8+)
- Docker and Docker Compose
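You can confirm the tools are installed and recent enough with:

```bash
# Check installed versions against the minimums above.
node --version            # should print v14.x or newer
python3 --version         # should print 3.8 or newer
docker --version
docker compose version    # Compose v2; older setups use docker-compose --version
```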
To set up the client:

- Navigate to the client directory: `cd client`
- Install dependencies: `npm install`
- Build the extension: `npm run build`
- Load the extension in Chrome:
  - Open Chrome and go to `chrome://extensions/`
  - Enable "Developer mode"
  - Click "Load unpacked" and select the `dist` folder
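The first three steps can be chained from the repository root:

```bash
# One-shot client setup and build.
cd client && npm install && npm run build
```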
Pre-built Docker images and hosted instances of the backend services are available at:

- DockerHub: https://hub.docker.com/repository/docker/vishav9933
- Clickbait API: https://sss-click-bait-latest.onrender.com/
- SSL API: https://sss-ssl-latest.onrender.com/
- Hate Speech API: https://sss-hate-speech-latest.onrender.com/
- Fake News API: https://social-street-smart-latest.onrender.com/
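To poke a hosted service without running anything locally, a request along these lines should work; the `/predict` route is taken from the local endpoints listed later in this document, and the JSON field name is an assumption, so check the service code for the exact payload shape:

```bash
# Hypothetical smoke test against the hosted Clickbait API.
# The "headline" field is an assumed payload key.
curl -s -X POST https://sss-click-bait-latest.onrender.com/predict \
  -H "Content-Type: application/json" \
  -d '{"headline": "You Will Never Guess What Happened Next"}'
```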
The backend services are containerized using Docker:
- Navigate to the server directory: `cd server`
- Start all services: `docker compose up`
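A few standard Docker Compose commands are useful while developing:

```bash
# Start the services in the background instead of attached to the terminal.
docker compose up -d

# Follow the combined logs of all running services.
docker compose logs -f

# Stop and remove the service containers when you are done.
docker compose down
```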
Some data and models are too large to be included in the GitHub repository. Follow these steps to obtain and set up the necessary files:
For the Fake News service:

- Download the Fake News datasets from Kaggle: AOSSIE Fake News Detection Datasets
- Extract and place the files in `server/fakeNewsAPI/ML/` and `server/fakeNews/predictions/ML/`
- Download the GloVe word embeddings (`glove.6B.100d.txt`) from Stanford NLP and place a copy in both directories mentioned above (a quick check follows below)
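To verify that the embeddings ended up in both places (`glove.6B.100d.txt` is the only filename named above; the Kaggle archives contain additional files):

```bash
# List the embeddings file in both target directories; an error from
# ls means the file is missing from that location.
ls -lh server/fakeNewsAPI/ML/glove.6B.100d.txt \
       server/fakeNews/predictions/ML/glove.6B.100d.txt
```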
For the Clickbait model:

- Download the Click-Bait dataset from Kaggle: AOSSIE Click-Bait Dataset
- Extract and place the files in the appropriate directory under `ML/clickbait/`

For the Google News vectors:

- Download the Google News vectors dataset from Kaggle: Google News Vectors
- Extract and place the files in the appropriate directory under `ML/`
For the Hate Speech (toxic comment) model:

- Download the GloVe word embeddings (`glove.6B.zip`) from Stanford NLP
- Extract and place the files in `Toxic Comment/Data/glove.6B/` (see the example below)
- Download the Toxic Comment Classification Challenge dataset from Kaggle: Jigsaw Toxic Comment Classification Challenge
- Extract and place the files in `Toxic Comment/Data/toxic-comment/`
For the Image API's Google App credentials:

- Follow the instructions in this video to generate Google App Credentials: Generate Google App Credentials
- Save the generated JSON file as `GoogleAppCreds.json` in `server/imageAPI/tmp/`
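For instance, from the repository root (the downloaded filename below is a placeholder; yours will differ):

```bash
# Move the downloaded credentials into place under the expected name.
# "my-project-creds.json" is a placeholder for your actual download.
mkdir -p server/imageAPI/tmp
mv ~/Downloads/my-project-creds.json server/imageAPI/tmp/GoogleAppCreds.json
```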
For the News Origin service's Google Custom Search key:

- Go to Google Custom Search API
- Click on "Get a Key" and follow the instructions to create a new project and generate an API key
- Create a `.env` file in `server/News_Origin/` and add your API key: `GOOGLE_API_KEY=your_api_key_here`
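From the repository root, that can be done in one line (substitute your real key):

```bash
# Write the key into the News Origin service's .env file.
echo 'GOOGLE_API_KEY=your_api_key_here' > server/News_Origin/.env
```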
After building the extension, you can load it into Chrome as an unpacked extension. Any changes to the source code will require rebuilding the extension and reloading it in Chrome.
The Docker Compose file will start all the backend services. You can access them at the following endpoints:
- Clickbait API: `http://localhost:5000/predict`
- Hate Speech API: `http://localhost:5001/predict`
- Fake News API: `http://localhost:5002/predict`
- Image Disinformation API: `http://localhost:5003/analyze`
- News Origin API: `http://localhost:5004/origin`
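Once `docker compose up` is running, you can smoke-test an endpoint with curl; the JSON field name below is an assumption, so check the corresponding service's handler for the payload it actually expects:

```bash
# Hypothetical request to the local Fake News API; "text" is an
# assumed payload key, so verify it against the service code.
curl -s -X POST http://localhost:5002/predict \
  -H "Content-Type: application/json" \
  -d '{"text": "Scientists announce a miracle cure with no published evidence"}'
```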
To contribute changes:

- Fork the repository
- Create a new branch: `git checkout -b feature-name`
- Make your changes
- Run tests (if available)
- Commit your changes: `git commit -m 'Add some feature'`
- Push to the branch: `git push origin feature-name`
- Submit a pull request
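As a concrete run-through, a hypothetical fix to the clickbait service might look like this (the branch name and commit message are illustrative only):

```bash
# Example pass through the workflow above.
git checkout -b fix-clickbait-timeout
# (edit files, then stage and commit the changes)
git add .
git commit -m 'Fix request timeout in clickbait service'
git push origin fix-clickbait-timeout
```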
Code style:

- For Python: Follow PEP 8
- For JavaScript/TypeScript: Use ESLint with the project's configuration
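For example, typical entry points for these tools look like the following; the exact paths and configuration depend on the repository, so treat these invocations as illustrative:

```bash
# Illustrative lint runs; adjust paths to match the actual layout,
# and install flake8 first if it is not already available.
npx eslint client/src
python -m flake8 server/
```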
Use the GitHub Issues tab to report bugs or suggest enhancements. Please provide as much detail as possible, including:
- A clear and descriptive title
- Steps to reproduce the issue
- Expected behavior
- Actual behavior
- Any relevant logs or screenshots
Thank you for contributing to Social Street Smart!