
gdeltr2's Introduction

gdeltr2

R’s modern GDELT Project interface

What is the GDELT Project?

The Global Database of Events, Language, and Tone [GDELT] is a non-profit initiative whose goal is to:

construct a catalog of human societal-scale behavior and beliefs across all countries of the world, connecting every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what’s happening around the world, what its context is and who’s involved, and how the world is feeling about it, every single day.

GDELT was founded in 1994 and its data commences in 1979. Over the last two years, GDELT’s functionality and abilities have grown exponentially: in May 2014 GDELT processed 3,928,926 records, whereas in May 2016 it processed 6,198,461. GDELT continues to evolve and to integrate advanced machine-learning tools, including Google Cloud Vision, which underpins a data store that became available in February 2016.

This package wraps GDELT’s four primary data stores.

Why gdeltr2?

My main motivation for building this package is simple: GDELT IS INCREDIBLE!

Accessing GDELT’s trove of data is possible, but either difficult or costly.

Currently, anyone proficient in SQL can access the data via Google BigQuery. The problem is that users have to pay above a certain query threshold, and even then you need another layer of connectivity to explore the data in R.

R has two existing packages that let users interact with portions of GDELT’s data outside of BigQuery, but these packages are old, incomplete, and difficult to use. It is my hope that gdeltr2 gives the R user easy access to GDELT’s data, allowing for faster, more exhilarating data visualization and analysis!

PRIOR TO INSTALL

This package may require the development versions of devtools, dplyr, and trelliscopejs, so, to be safe, run the following code before installation:

devtools::install_github("hadley/devtools")
devtools::install_github("hadley/dplyr")
devtools::install_github("hafen/trelliscopejs")

Installation

devtools::install_github("abresler/gdeltr2")

Function Ontology

The package currently consists of two function families, data acquisition and data tidying.

The package’s data acquisition functions begin with get_urls_ for acquiring data store log information, get_codes_ for acquiring code books, and get_data_ for downloading and reading data.

The data tidying functions begin with parse_ and apply to a number of features in the gkg and vgkg data stores, described in further detail below.
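As a sketch of how the acquisition families combine (function names and arguments are those documented in this README; an internet connection and substantial bandwidth are required):

```r
library(gdeltr2)

# get_urls_: list the available daily GKG capture logs
gkg_log_urls <- get_urls_gkg_daily_summaries()

# get_codes_: pull a code book for resolving coded columns later
gcam_codes <- get_codes_gcam()

# get_data_: download and read one day of GKG summary counts
gkg_counts <-
  get_data_gkg_days_summary(dates = c("2014-05-15"), is_count_file = T)
```

The parse_ tidying functions then expand the concatenated columns of the downloaded tables.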

CAUTION

  • gdeltr2 requires an internet connection for any data retrieval function
  • The package’s get_data_gkg_ and get_data_gdelt_ functions are extremely bandwidth-intensive given the download sizes of these data stores.
  • The package is very memory intensive given the unzipped size of the GDELT Event, Global Knowledge Graph and Visual Knowledge Graph files.

Primary Functions

  • Full Text API
    • ft_v2_api() - retrieves descriptive data from the V2 full-text API; see this blog post for more on how to use it
    • ft_trending_terms() - retrieves trending terms over the last 15 minutes. The term can be a GDELT tag, location, person, place, or thing.
  • GDELT Events
    • get_urls_gdelt_event_log() - retrieves descriptive data and urls for all available GDELT event downloads.
    • get_data_gdelt_period_event_totals() - retrieves summary event data for a given period [daily, monthly, yearly]; this can be grouped by country.
    • get_data_gdelt_periods_event() - retrieves GDELT event data for specified periods. Periods are 4-digit years from 1979 to 2005, 6-digit year-month codes from January 2006 to March 2013, and 8-digit year-month-day codes thereafter.
  • Global Knowledge Graph
    • get_urls_gkg_15_minute_log() - retrieves GKG 15-minute capture logs; data begins February 18th, 2015 for the three table types
      • gkg: the full GKG data set; contains columns that may require further tidying, tied to a GKG Record ID
      • export: replicates the output contained in the GDELT event table for processed documents, tied to a Global Event ID
      • mentions: contains information surrounding the processed events, including sources, tone, and location within the document, tied to a Global Event ID
    • get_urls_gkg_daily_summaries() - retrieves daily GKG capture logs; data begins in April of 2013.
      • Each day contains a count file and the full GKG output.
    • get_data_gkg_days_summary() - retrieves GKG daily summary data for specified date(s); count files are selected with is_count_file = T
    • get_data_gkg_days_detailed() - retrieves GKG data from the data cached every 15 minutes for specified date(s) for a given table. The table can be one of c('gkg', 'export', 'mentions'). This function may require significant bandwidth and memory given the potential file sizes.
  • American Television Knowledge Graph
    • get_urls_gkg_tv_daily_summaries() - retrieves available dates
      • get_data_gkg_tv_days() - retrieves data for specified dates. Note that the data is on a 2-day lag, so the most recent data is 2 days old.
  • Location Sentiment API
    • dictionary_stability_locations() - retrieves possible locations
    • instability_api_locations() - retrieves instability data for specified locations and a time period. Variables can be c('instability', 'conflict', 'protest', 'tone', 'relative mentions'). Time periods can be c('daily', '15 minutes'); for daily, the data is the per-day average of the specified variable over the last 180 days, and for 15 minutes, the data is the variable reading every 15 minutes over the last week.
  • Visual Global Knowledge Graph
    • get_urls_vgkg() - retrieves VGKG log urls
    • get_data_vgkg_dates() - retrieves VGKG data from the data cached every 15 minutes for specified date(s).
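The period codes accepted by get_data_gdelt_periods_event() follow the era-dependent formats described under GDELT Events above; a minimal base-R sketch of constructing each:

```r
# 4-digit year code for 1979-2005 files
period_year <- 1989

# 6-digit year-month code for January 2006 - March 2013 files
period_month <- as.integer(format(as.Date("2007-03-01"), "%Y%m"))    # 200703

# 8-digit year-month-day code for files from April 2013 onward
period_day <- as.integer(format(as.Date("2014-05-15"), "%Y%m%d"))    # 20140515

periods <- c(period_year, period_month, period_day)
```

A vector like this can then be passed as the periods argument, e.g. get_data_gdelt_periods_event(periods = periods).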

Tidying Functions

Many of the columns in the GKG output are concatenated and require further parsing for proper analysis. These functions tidy those concatenated columns; note that, given the file sizes, they may be time-consuming.

V2 Full Text API

You can refer to this blog post that discusses how to use this functionality.

Global Knowledge Graph

  • parse_gkg_mentioned_names() - parses mentioned names
  • parse_gkg_mentioned_people() - parses mentioned people
  • parse_gkg_mentioned_organizations() - parses mentioned organizations
  • parse_gkg_mentioned_numerics() - parses mentioned numeric figures
  • parse_gkg_mentioned_themes() - parses mentioned themes, tied to the GKG theme codes
  • parse_gkg_mentioned_gcams() - parses resolved GCAMs, tied to the GCAM code book
  • parse_gkg_mentioned_dates() - parses mentioned dates according to the GKG scheme
  • parse_gkg_xml_extras() - parses XML metadata from the GKG table

Visual Global Knowledge Graph

  • parse_vgkg_labels() - parses and labels learned items
  • parse_vgkg_landmarks() - parses and geocodes learned landmarks
  • parse_vgkg_logos() - parses learned logos
  • parse_vgkg_safe_search() - parses safe search likelihoods
  • parse_vgkg_faces() - parses learned faces
  • parse_vgkg_ocr() - parses OCR’d items
  • parse_vgkg_languages() - parses languages

Code Books

The GDELT and GKG datasets contain a whole host of codes that must be resolved to be human-readable. These functions provide easy access to the relevant code books:

  • get_codes_gcam() - retrieves Global Content Analysis Measurement [GCAM] codes
  • get_codes_cameo_country() - retrieves Conflict and Mediation Event Observations [CAMEO] country codes
  • get_codes_cameo_ethnic() - retrieves CAMEO ethnic codes
  • get_codes_cameo_events() - retrieves CAMEO event codes
  • get_codes_gkg_themes() - retrieves GKG theme codes
  • get_codes_cameo_type() - retrieves CAMEO type codes
  • get_codes_cameo_religion() - retrieves CAMEO religion codes
  • get_codes_cameo_known_groups() - retrieves CAMEO known-group codes
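A typical use of the code books is joining them onto downloaded data so coded columns become human-readable. The dplyr sketch below is illustrative only: the join-column names are assumptions, so inspect both tibbles with names() before joining.

```r
library(gdeltr2)
library(dplyr)

cameo_countries <- get_codes_cameo_country()
events_1989 <- get_data_gdelt_periods_event(periods = 1989)

# Hypothetical key names; check names(events_1989) and
# names(cameo_countries) for the real columns before running
events_named <-
  events_1989 %>%
  left_join(cameo_countries, by = c("codeActor1Country" = "codeCAMEO"))
```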

Coming Soon

  • Vignettes
  • Generic data visualization functions
  • Generic machine learning and data analysis functions
  • bigrquery integration
  • Third party database mirror

EXAMPLES

library(gdeltr2)
load_needed_packages(c('dplyr', 'magrittr'))

GDELT Event Data

events_1989 <-
  get_data_gdelt_periods_event(
    periods = 1989,
    return_message = T
  )

GKG Data

gkg_summary_count_may_15_16_2014 <-
  get_data_gkg_days_summary(
    dates = c('2014-05-15', '2014-05-16'),
    is_count_file = T,
    return_message = T
  )

gkg_full_june_2_2016 <-
  get_data_gkg_days_detailed(
    dates = c("2016-06-02"),
    table_name = 'gkg',
    return_message = T
  )

gkg_mentions_may_12_2016 <-
  get_data_gkg_days_detailed(
    dates = c("2016-05-12"),
    table_name = 'mentions',
    return_message = T
  )

GKG Television Data

gkg_tv_test <- 
  get_data_gkg_tv_days(dates = c("2016-06-17", "2016-06-16"))

GKG Tidying

load_needed_packages(c('magrittr'))

gkg_test <- 
  get_data_gkg_days_detailed(only_most_recent = T, table_name = 'gkg')

gkg_sample_df <- 
  gkg_test %>% 
  sample_n(1000)

xml_extra_df <- 
  gkg_sample_df %>% 
  parse_gkg_xml_extras(filter_na = T, return_wide = F)

article_tone <- 
  gkg_sample_df %>% 
  parse_gkg_mentioned_article_tone(filter_na = T, return_wide = T)

gkg_dates <- 
  gkg_sample_df %>% 
  parse_gkg_mentioned_dates(filter_na = T, return_wide = T)

gkg_gcams <- 
  gkg_sample_df %>% 
  parse_gkg_mentioned_gcams(filter_na = T, return_wide = T)

gkg_event_counts <- 
  gkg_sample_df %>% 
  parse_gkg_mentioned_event_counts(filter_na = T, return_wide = T)

gkg_locations <- 
  gkg_sample_df %>% 
  parse_gkg_mentioned_locations(filter_na = T, return_wide = T)

gkg_names <- 
  gkg_sample_df %>% 
  parse_gkg_mentioned_names(filter_na = T, return_wide = T)

gkg_themes <- 
  gkg_sample_df %>% 
  parse_gkg_mentioned_themes(theme_column = 'charLoc',
                             filter_na = T, return_wide = T)

gkg_numerics <- 
  gkg_sample_df %>% 
  parse_gkg_mentioned_numerics(filter_na = T, return_wide = T)

gkg_orgs <-
  gkg_sample_df %>% 
  parse_gkg_mentioned_organizations(organization_column = 'charLoc',
                                    filter_na = T, return_wide = T)

gkg_quotes <-
  gkg_sample_df %>% 
  parse_gkg_mentioned_quotes(filter_na = T, return_wide = T)

gkg_people <- 
  gkg_sample_df %>% 
  parse_gkg_mentioned_people(people_column = 'charLoc', filter_na = T, return_wide = T)

VGKG Tidying

vgkg_test <- 
  get_data_vgkg_dates(only_most_recent = T)

vgkg_sample <- 
  vgkg_test %>% 
  sample_n(1000)

vgkg_labels <- 
  vgkg_sample %>% 
  parse_vgkg_labels(return_wide = T)

faces_test <- 
  vgkg_sample %>% 
  parse_vgkg_faces(return_wide = T)

landmarks_test <- 
  vgkg_sample %>% 
  parse_vgkg_landmarks(return_wide = F)

logos_test <- 
  vgkg_sample %>% 
  parse_vgkg_logos(return_wide = T)

ocr_test <- 
  vgkg_sample %>% 
  parse_vgkg_ocr(return_wide = F)

search_test <- 
  vgkg_sample %>% 
  parse_vgkg_safe_search(return_wide = F)

Sentiment API

location_codes <-
  dictionary_stability_locations()
location_test <-
  instability_api_locations(
    location_ids = c("US", "IS", "CA", "TU", "CH", "UK", "IR"),
    use_multi_locations = c(T, F),
    variable_names = c('instability', 'tone', 'protest', 'conflict'),
    time_periods = c('daily'),
    nest_data = F,
    days_moving_average = NA,
    return_wide = T,
    return_message = T
  )

location_test %>%
  dplyr::filter(codeLocation %>% is.na()) %>%
  group_by(nameLocation) %>%
  summarise_at(.vars = c('instability', 'tone', 'protest', 'conflict'),
               funs(mean)) %>%
  arrange(desc(instability))

gdeltr2's People

Contributors

abresler

gdeltr2's Issues

Word Clouds Issue

The ft_v2_api() function returns an error message when using the modes 'WordCloud*':
Error in mutate():
! Problem while computing modeSearch = modeSearch %>% str_to_upper().
Caused by error in stri_trans_toupper():
! object 'modeSearch' not found

Is it possible to use data.table::fread to replace readr::read_tsv in gdeltr2::get_gdelt_url_data?

Two problems arose when I used the gdeltr2::get_data_gdelt_periods_event() function.

The first, and more crucial, is that it falsely guesses the data type of some columns. For example, if you download the zip file for 201303, the codeActor1 column is guessed as logical because the top 1,000 rows are empty, when it should be character. I know this can be solved by assigning the type of each column in readr::read_tsv; that will take a lot of time to set up, but it could be a solution.
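If pinning column types is the route taken, a hedged readr sketch (assuming the file has already been unzipped locally; the column name codeActor1 follows the issue's wording and would only apply after the package has renamed columns):

```r
library(readr)

# Option 1: scan every row before guessing types, instead of the first 1,000
df <- read_tsv("201303.export.CSV", guess_max = Inf)

# Option 2: force the known-problem column to character and guess the rest
df <- read_tsv(
  "201303.export.CSV",
  col_types = cols(codeActor1 = col_character(), .default = col_guess())
)
```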

The second is about efficiency. The same 201303 file mentioned above is around 620 MB, and it takes several minutes to read on my computer (and crashes because of missing brackets after lubridate::ymd in line 1119). I tried data.table::fread() as a replacement and gained a considerable efficiency boost. The only problem I met was that I had to add the parameter data.table = FALSE to avoid changing the data structure to a data.table.
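The fread() swap the issue describes might look like this (a sketch; the file path is illustrative):

```r
library(data.table)

# fread() detects the tab delimiter and reads a ~620 MB file far faster;
# data.table = FALSE returns a plain data.frame, so downstream code that
# expects one does not need to change
df <- fread("201303.export.CSV", sep = "\t", header = FALSE, data.table = FALSE)
```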

I am not an expert on package development, so perhaps this is not the right way to solve these problems. The second one is not that important, but the first I think is worth mentioning.

Issues downloading GDELT event data prior to 2013-04-01

I'm trying to access GDELT Event data (V1) prior to 2013. I took the code below from the gdeltr2 vignette:

> events1983 <-
+   get_data_gdelt_periods_event(periods = 1983)

However, this returns 0 events:

You got 2971 GDELT Global Knowledge Graph URLS from 2013-04-01 to 2021-01-28
|==================================================================================> =======| 100% 222 MB
You retrieved 0 GDELT events for the period of 1983

It's the same for any year prior to 2013. Yet the code seems to work fine for the periods that are available on a daily basis:

> events20130401 <-
+   get_data_gdelt_periods_event(periods = 20130401)

Downloaded, parsed and imported http://data.gdeltproject.org/events/20130401.export.CSV.zip
You retrieved 27758 GDELT events for the period of 20130401

Any thoughts? Thanks!

Some functions in gdeltr2 do not work

  1. When I use the gdeltr2 package in R to extract GDELT data, I find that many functions have different names or no longer work. Are there up-to-date instructions for this package?
  2. Does anyone have examples of how to extract GDELT data by keyword (in R or Python)? It seems that the "GDELT Analytical Service", which provided keyword search and data download, also no longer works.

Thank you very much!

domain, language, and countries: ft_v2_api()

Hi - thank you so much for building this package. I'm having a bit of trouble filtering by domains, source_language, and/or source_countries with the ft_v2_api() command and was curious if you could provide any examples showing the correct syntax. I'm passing domain names, languages, and countries, but it still returns the whole universe of articles that contain my key words, not filtered by the additional parameters. The command correctly searches for terms but does not apply my additional restrictions.

Thank you so much!

Read the downloaded data

I am trying to open the downloaded GDELT data with read.csv.

The GDELT event data, to my knowledge, should have 61 variables (61 columns).

If I open the .csv file without setting "sep", I get a data frame with 7 columns. This is because the raw data uses "\t" to separate variables.

If I read using sep="\t", I still only get a data frame with 58 columns. What's worse, many values apparently appear in the wrong columns.

I wonder how this could happen?
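One common cause with GDELT's raw files is quoting: read.csv/read.delim honor double quotes by default, so a stray " inside a field swallows the following tabs and shifts every later column. A minimal, self-contained reproduction (synthetic data, not real GDELT rows):

```r
# A line whose second field starts with an unmatched double quote
tmp <- tempfile(fileext = ".tsv")
writeLines("a\t\"quoted\tb", tmp)

# Default quoting merges the remaining fields into one (and warns about
# EOF within a quoted string), so you get fewer columns than expected
bad <- read.delim(tmp, sep = "\t", header = FALSE)

# Disabling quoting preserves the tab-delimited structure
good <- read.delim(tmp, sep = "\t", header = FALSE, quote = "")
ncol(good)   # 3
```

Reading the real files with quote = "" (and header = FALSE, since GDELT ships no header row) is worth trying before suspecting the data itself.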

Errors trying to run the tutorial

I attempted to run: learning_to_program_with_gdeltr2.r and encountered errors.

R version 4.0.1 (2020-06-06) -- "See Things Now"
Copyright (C) 2020 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64 (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

packages_to_install <-
  c("devtools", "dplyr", "rlang", "tidyr", "purrr", "stringr", "lubridate",
    "readr", "tidyr", "ggplot2", "highcharter", "tidyverse", "tibble", "hrbrthemes",
    "ggthemes", "jsonlite")

lapply(packages_to_install, install.packages, character.only = T)
Error in install.packages : Updating loaded packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/dplyr_1.0.0.zip'
Content type 'application/zip' length 1302941 bytes (1.2 MB)
downloaded 1.2 MB

package ‘dplyr’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)

There is a binary version available but the source version
is later:
binary source needs_compilation
rlang 0.4.6 0.4.7 TRUE

Binaries will be installed
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/rlang_0.4.6.zip'
Content type 'application/zip' length 1117248 bytes (1.1 MB)
downloaded 1.1 MB

package ‘rlang’ successfully unpacked and MD5 sums checked
Warning in install.packages :
cannot remove prior installation of package ‘rlang’
Warning in install.packages :
problem copying C:\Users\james\Desktop\Documents\R\win-library\4.0\00LOCK\rlang\libs\x64\rlang.dll to C:\Users\james\Desktop\Documents\R\win-library\4.0\rlang\libs\x64\rlang.dll: Permission denied
Warning in install.packages :
restored ‘rlang’

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/tidyr_1.1.0.zip'
Content type 'application/zip' length 1514102 bytes (1.4 MB)
downloaded 1.4 MB

package ‘tidyr’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/purrr_0.3.4.zip'
Content type 'application/zip' length 429884 bytes (419 KB)
downloaded 419 KB

package ‘purrr’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/stringr_1.4.0.zip'
Content type 'application/zip' length 216792 bytes (211 KB)
downloaded 211 KB

package ‘stringr’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/lubridate_1.7.9.zip'
Content type 'application/zip' length 1748364 bytes (1.7 MB)
downloaded 1.7 MB

package ‘lubridate’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/readr_1.3.1.zip'
Content type 'application/zip' length 1716096 bytes (1.6 MB)
downloaded 1.6 MB

package ‘readr’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/tidyr_1.1.0.zip'
Content type 'application/zip' length 1514102 bytes (1.4 MB)
downloaded 1.4 MB

package ‘tidyr’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/ggplot2_3.3.2.zip'
Content type 'application/zip' length 4067278 bytes (3.9 MB)
downloaded 3.9 MB

package ‘ggplot2’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/highcharter_0.7.0.zip'
Content type 'application/zip' length 3001981 bytes (2.9 MB)
downloaded 2.9 MB

package ‘highcharter’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/tidyverse_1.3.0.zip'
Content type 'application/zip' length 440111 bytes (429 KB)
downloaded 429 KB

package ‘tidyverse’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)

There is a binary version available but the source version
is later:
binary source needs_compilation
tibble 3.0.2 3.0.3 TRUE

Binaries will be installed
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/tibble_3.0.2.zip'
Content type 'application/zip' length 413689 bytes (403 KB)
downloaded 403 KB

package ‘tibble’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/hrbrthemes_0.8.0.zip'
Content type 'application/zip' length 2326933 bytes (2.2 MB)
downloaded 2.2 MB

package ‘hrbrthemes’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/ggthemes_4.2.0.zip'
Content type 'application/zip' length 440978 bytes (430 KB)
downloaded 430 KB

package ‘ggthemes’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
WARNING: Rtools is required to build R packages but is not currently installed. Please download and install the appropriate version of Rtools before proceeding:

https://cran.rstudio.com/bin/windows/Rtools/
Installing package into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/jsonlite_1.7.0.zip'
Content type 'application/zip' length 1170880 bytes (1.1 MB)
downloaded 1.1 MB

package ‘jsonlite’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\james\AppData\Local\Temp\RtmpoRW9yz\downloaded_packages
[[1]]
NULL

[[2]]
NULL

[[3]]
NULL

[[4]]
NULL

[[5]]
NULL

[[6]]
NULL

[[7]]
NULL

[[8]]
NULL

[[9]]
NULL

[[10]]
NULL

[[11]]
NULL

[[12]]
NULL

[[13]]
NULL

[[14]]
NULL

[[15]]
NULL

[[16]]
NULL

devtools::install_github("hafen/trelliscopejs")
Downloading GitHub repo hafen/trelliscopejs@master
These packages have more recent versions available.
It is recommended to update all of them.
Which would you like to update?

1: All
2: CRAN packages only
3: None
4: rlang (0.4.6 -> 0.4.7) [CRAN]
5: pillar (1.4.4 -> 1.4.6) [CRAN]
6: pkgbuild (1.0.8 -> 1.1.0) [CRAN]
7: backports (1.1.7 -> 1.1.8) [CRAN]
8: processx (3.4.2 -> 3.4.3) [CRAN]
9: xfun (0.14 -> 0.15 ) [CRAN]
10: tibble (3.0.2 -> 3.0.3) [CRAN]
11: Rcpp (1.0.4.6 -> 1.0.5) [CRAN]
12: isoband (0.2.1 -> 0.2.2) [CRAN]

Enter one or more numbers, or an empty line to skip updates:
devtools::install_github("abresler/gdeltr2")
Enter one or more numbers, or an empty line to skip updates:
1
rlang (0.4.6 -> 0.4.7 ) [CRAN]
pillar (1.4.4 -> 1.4.6 ) [CRAN]
pkgbuild (1.0.8 -> 1.1.0 ) [CRAN]
backports (1.1.7 -> 1.1.8 ) [CRAN]
processx (3.4.2 -> 3.4.3 ) [CRAN]
xfun (0.14 -> 0.15 ) [CRAN]
tibble (3.0.2 -> 3.0.3 ) [CRAN]
Rcpp (1.0.4.6 -> 1.0.5 ) [CRAN]
isoband (0.2.1 -> 0.2.2 ) [CRAN]
checkmate (NA -> 2.0.0 ) [CRAN]
moments (NA -> 0.14 ) [CRAN]
diptest (NA -> 0.75-7) [CRAN]
mclust (NA -> 5.4.6 ) [CRAN]
Installing 13 packages: rlang, pillar, pkgbuild, backports, processx, xfun, tibble, Rcpp, isoband, checkmate, moments, diptest, mclust
Installing packages into ‘C:/Users/james/Desktop/Documents/R/win-library/4.0’
(as ‘lib’ is unspecified)

There are binary versions available but the source
versions are later:
binary source needs_compilation
rlang 0.4.6 0.4.7 TRUE
pillar 1.4.4 1.4.6 FALSE
pkgbuild 1.0.8 1.1.0 FALSE
backports 1.1.7 1.1.8 TRUE
tibble 3.0.2 3.0.3 TRUE

trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/processx_3.4.3.zip'
Content type 'application/zip' length 1147666 bytes (1.1 MB)
downloaded 1.1 MB

trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/xfun_0.15.zip'
Content type 'application/zip' length 229776 bytes (224 KB)
downloaded 224 KB

trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/Rcpp_1.0.5.zip'
Content type 'application/zip' length 3265784 bytes (3.1 MB)
downloaded 3.1 MB

trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/isoband_0.2.2.zip'
Content type 'application/zip' length 3408747 bytes (3.3 MB)
downloaded 3.3 MB

trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/checkmate_2.0.0.zip'
Content type 'application/zip' length 699101 bytes (682 KB)
downloaded 682 KB

trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/moments_0.14.zip'
Content type 'application/zip' length 56099 bytes (54 KB)
downloaded 54 KB

trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/diptest_0.75-7.zip'
Content type 'application/zip' length 364327 bytes (355 KB)
downloaded 355 KB

trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/mclust_5.4.6.zip'
Content type 'application/zip' length 4630961 bytes (4.4 MB)
downloaded 4.4 MB

package ‘processx’ successfully unpacked and MD5 sums checked
Error: Failed to install 'trelliscopejs' from GitHub:
(converted from warning) cannot remove prior installation of package ‘processx’

"My Text" == 'My Text'
[1] TRUE

"My Text" == 'my text'
[1] FALSE

Objects -----------------------------------------------------------------

Objects

my_favorite_team <-
  "Brooklyn Nets"

favoriteNBAPlayerEver <-
  "Mitch Richmond"

my_favorite_team
[1] "Brooklyn Nets"

favoriteNBAPlayerEver
[1] "Mitch Richmond"

my_family <-
  c("Alex", "Liz", "Chase", "Theo")

their_type <-
  c("Adult", "Adult", "Toy Poodle", "Baby")

their_age <-
  c(33, 32, 2, 0)

my_family
[1] "Alex" "Liz" "Chase" "Theo"
their_type
[1] "Adult" "Adult" "Toy Poodle" "Baby"
their_age
[1] 33 32 2 0

all_objects <- c(my_family, their_type, their_age)
all_objects
[1] "Alex" "Liz" "Chase" "Theo"
[5] "Adult" "Adult" "Toy Poodle" "Baby"
[9] "33" "32" "2" "0"
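The quoted ages in the output above show why this happened: `c()` always returns a single atomic vector, so the numeric values are coerced to the most general type present (character). A list, by contrast, preserves each element's type:

```r
# c() coerces mixed inputs to the most general type (character here):
mixed <- c("Alex", 33)
class(mixed)
#> [1] "character"

# A list keeps each element's original type:
kept <- list(names = c("Alex", "Liz"), ages = c(33, 32))
class(kept$ages)
#> [1] "numeric"
```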

Data Frames -------------------------------------------------------------

Data Frame

library(dplyr)

Attaching package: ‘dplyr’

The following objects are masked from ‘package:stats’:

    filter, lag

The following objects are masked from ‘package:base’:

    intersect, setdiff, setequal, union

Warning message:
package ‘dplyr’ was built under R version 4.0.2

df_bresler_family <-
  data_frame(name = my_family, type = their_type, age = their_age)
Warning message:
data_frame() is deprecated as of tibble 1.1.0.
Please use tibble() instead.
This warning is displayed once every 8 hours.
Call lifecycle::last_warnings() to see where this warning was generated.

df_bresler_family

# A tibble: 4 x 3
  name  type         age
  <chr> <chr>      <dbl>
1 Alex  Adult         33
2 Liz   Adult         32
3 Chase Toy Poodle     2
4 Theo  Baby           0
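Following the deprecation warning, the same data frame can be built with `tibble()`. This sketch redefines the vectors so it runs on its own:

```r
library(tibble)

my_family  <- c("Alex", "Liz", "Chase", "Theo")
their_type <- c("Adult", "Adult", "Toy Poodle", "Baby")
their_age  <- c(33, 32, 2, 0)

# tibble() is the non-deprecated replacement for data_frame():
df_bresler_family <- tibble(name = my_family, type = their_type, age = their_age)
```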

View(df_bresler_family)

Working with the API

api ---------------------------------------------------------------------

library(gdeltr2)
Error in library(gdeltr2) : there is no package called ‘gdeltr2’
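This error, and the `could not find function` errors that follow it, all have the same cause: the package is not installed. `gdeltr2` is not on CRAN, so it is installed from GitHub (assuming the `abresler/gdeltr2` repository that this README belongs to):

```r
# gdeltr2 is GitHub-only; install it with devtools before loading:
devtools::install_github("abresler/gdeltr2")
library(gdeltr2)
```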

Terms

sports_terms <-
  c('"Brooklyn Nets"', "Caris LeVert", '"Kyrie Irving" Trade', '"Luka Doncic"',
    'NBA "Draft Prospect"', '"Jarrett Allen"')

political_terms <-
  c('"Bill Perkins"', '"New York City" "City Counsel"')

finance_real_estate_terms <-
  c("Eastdil", "Condo Bubble", '"JBG Smith"', '"CPPIB"', "Anbang",
    "WeWork", '"Goldman Sachs"', 'Blackstone "Real Estate"')

other_terms <-
  c("Supergoop", '"LNG"', 'Maryland "High School Football"',
    '"Jared Kushner"', '"Eddie Huang"')

my_terms <-
  c(sports_terms, political_terms, finance_real_estate_terms, other_terms)

domains -----------------------------------------------------------------

news_domains <-
  c("nypost.com", "washingtonpost.com", "wsj.com", "gothamgazette.com")

sports_domains <-
  c("espn.com", "netsdaily.com")

finance_real_estate_domains <-
  c("realdeal.com", "zerohedge.com", "institutionalinvestor.com", "pionline.com",
    "curbed.com", "archdaily.com")

random_domains <-
  c("tmz.com", "snopes.com", "alphr.com", "oilprice.com")

my_domains <-
  c(news_domains, sports_domains, finance_real_estate_domains, random_domains)

GKG ---------------------------------------------------------------------

df_gkg <-
  get_gdelt_codebook_ft_api(code_book = "gkg")
Error in get_gdelt_codebook_ft_api(code_book = "gkg") :
  could not find function "get_gdelt_codebook_ft_api"

my_themes <-
  c("ECON_WORLDCURRENCIES_CHINESE_YUAN", # stories about China's currency -- a good way to find stories about China's economy
    "ECON_BUBBLE",                       # articles about economic bubbles
    "TAX_FNCACT_BROKER",                 # articles about brokers of things
    "ECON_HOUSING_PRICES",               # articles about housing prices
    "ECON_BITCOIN",                      # articles about bitcoin
    "ELECTION_FRAUD",                    # articles about election fraud
    "SOC_POINTSOFINTEREST_GOVERNMENT_BUILDINGS",     # articles about government buildings
    "WB_1277_BANKRUPTCY_AND_LIQUIDATION",            # articles about bankruptcy
    "WB_639_REPRODUCTIVE_MATERNAL_AND_CHILD_HEALTH", # articles about pregnancy and child health
    "WB_2151_CHILD_DEVELOPMENT",         # articles about child development
    "TAX_FNCACT_BUILDER"                 # articles about builders
  )

set.seed(1234)

random_themes <-
  df_gkg %>% pull(idGKGTheme) %>% sample(3)
Error in eval(lhs, parent, parent) : object 'df_gkg' not found

my_themes <-
  c(my_themes, random_themes)
Error: object 'random_themes' not found

OCR ---------------------------------------------------------------------

my_ocr <-
  c(
    "Brooklyn Nets",
    "Panerai",
    "Four Seasons",
    "NBA",
    "Goldman Sachs",
    "Philadelphia Eagles",
    "Supergoop",
    "Boston Celtics",
    "Big Baller Brand",
    "BBB",
    "Boston Properties"
  )

imagetags ---------------------------------------------------------------

df_imagetags <-
  get_gdelt_codebook_ft_api(code_book = "imagetags")
Error in get_gdelt_codebook_ft_api(code_book = "imagetags") :
  could not find function "get_gdelt_codebook_ft_api"

View(df_imagetags)
Error in View : object 'df_imagetags' not found

my_image_tags <-
  c("Toy Poodle", "poodle", "commercial building", "basketball player", "supermodel")

Image Web ---------------------------------------------------------------

df_imageweb <-
  get_gdelt_codebook_ft_api(code_book = "imageweb")
Error in get_gdelt_codebook_ft_api(code_book = "imageweb") :
  could not find function "get_gdelt_codebook_ft_api"

View(df_imageweb)
Error in View : object 'df_imageweb' not found

my_image_web <-
  c("Jared Kushner", "Empire State Building", "New York City", "Ivanka Trump",
    "Tesla Model 3", "Jeremy Lin", "NBA", "Brooklyn Nets")

other_parameters --------------------------------------------------------

my_timespan <- "36 Hours"

df_countries <-
  get_gdelt_codebook_ft_api(code_book = "countries")
Error in get_gdelt_codebook_ft_api(code_book = "countries") :
  could not find function "get_gdelt_codebook_ft_api"

View(df_countries)
Error in View : object 'df_countries' not found

my_trelliscope_parameters <-
  list(
    rows = 1,
    columns = 2,
    path = NULL
  )

Artlist -----------------------------------------------------------------

(Note: `maximum_records =` was left without a value here, which is what triggers the cascade of parse errors below.)

get_data_ft_v2_api(
  terms = my_terms,
  domains = my_domains,
  images_web_tag = my_image_web,
  images_tag = my_image_tags,
  images_ocr = my_ocr,
  maximum_records =
  gkg_themes = my_themes,
Error: unexpected '=' in:
"  maximum_records =
  gkg_themes ="

  modes = c("Artlist"),
Error: unexpected ',' in "  modes = c("Artlist"),"
  timespans = my_timespan,
Error: unexpected ',' in "  timespans = my_timespan,"
  trelliscope_parameters = my_trelliscope_parameters
)
Error: unexpected ')' in ")"
trelliscopeImage
Error: object 'trelliscopeImage' not found

Timeline -----------------------------------------------------------------

get_data_ft_v2_api(
  terms = my_terms,
  domains = my_domains,
  images_web_tag = my_image_web,
  images_tag = my_image_tags,
  images_ocr = my_ocr,
  gkg_themes = my_themes,
  modes = c("TimelineVolInfo"),
  timespans = "12 Weeks",
  trelliscope_parameters = my_trelliscope_parameters
)
Error in get_data_ft_v2_api(terms = my_terms, domains = my_domains, images_web_tag = my_image_web, :
  could not find function "get_data_ft_v2_api"

trelliscopeHighcharter
Error: object 'trelliscopeHighcharter' not found

wordclouds --------------------------------------------------------------

get_data_ft_v2_api(
  terms = my_terms,
  domains = my_domains,
  images_web_tag = my_image_web,
  images_tag = my_image_tags,
  images_ocr = my_ocr,
  gkg_themes = my_themes,
  modes = c("WordCloudEnglish", "WordCloudTheme", "WordCloudImageTags", "WordCloudImageWebTags"),
  timespans = "2 weeks",
  trelliscope_parameters = list(
    rows = 1,
    columns = 1,
    path = NULL
  )
)
Error in get_data_ft_v2_api(terms = my_terms, domains = my_domains, images_web_tag = my_image_web, :
  could not find function "get_data_ft_v2_api"

trelliscopeWordcloud
Error: object 'trelliscopeWordcloud' not found

Warnings on load

I'm seeing a lot of warnings when I load the library that I haven't seen with any other R package. Do you know anything about these?

Warning messages:
1: replacing previous import ‘data.table::last’ by ‘dplyr::last’ when loading ‘gdeltr2’
2: replacing previous import ‘data.table::first’ by ‘dplyr::first’ when loading ‘gdeltr2’
3: replacing previous import ‘data.table::between’ by ‘dplyr::between’ when loading ‘gdeltr2’
4: replacing previous import ‘dplyr::collapse’ by ‘glue::collapse’ when loading ‘gdeltr2’
5: replacing previous import ‘curl::handle_reset’ by ‘httr::handle_reset’ when loading ‘gdeltr2’
6: replacing previous import ‘data.table::month’ by ‘lubridate::month’ when loading ‘gdeltr2’
7: replacing previous import ‘data.table::hour’ by ‘lubridate::hour’ when loading ‘gdeltr2’
8: replacing previous import ‘dplyr::intersect’ by ‘lubridate::intersect’ when loading ‘gdeltr2’
9: replacing previous import ‘data.table::quarter’ by ‘lubridate::quarter’ when loading ‘gdeltr2’
10: replacing previous import ‘data.table::week’ by ‘lubridate::week’ when loading ‘gdeltr2’
11: replacing previous import ‘data.table::year’ by ‘lubridate::year’ when loading ‘gdeltr2’
12: replacing previous import ‘dplyr::union’ by ‘lubridate::union’ when loading ‘gdeltr2’
13: replacing previous import ‘data.table::wday’ by ‘lubridate::wday’ when loading ‘gdeltr2’
14: replacing previous import ‘data.table::second’ by ‘lubridate::second’ when loading ‘gdeltr2’
15: replacing previous import ‘dplyr::setdiff’ by ‘lubridate::setdiff’ when loading ‘gdeltr2’
16: replacing previous import ‘data.table::minute’ by ‘lubridate::minute’ when loading ‘gdeltr2’
17: replacing previous import ‘data.table::mday’ by ‘lubridate::mday’ when loading ‘gdeltr2’
18: replacing previous import ‘data.table::yday’ by ‘lubridate::yday’ when loading ‘gdeltr2’
19: replacing previous import ‘data.table::isoweek’ by ‘lubridate::isoweek’ when loading ‘gdeltr2’
20: replacing previous import ‘httr::timeout’ by ‘memoise::timeout’ when loading ‘gdeltr2’
21: replacing previous import ‘jsonlite::flatten’ by ‘purrr::flatten’ when loading ‘gdeltr2’
22: replacing previous import ‘data.table::transpose’ by ‘purrr::transpose’ when loading ‘gdeltr2’
23: replacing previous import ‘curl::parse_date’ by ‘readr::parse_date’ when loading ‘gdeltr2’
24: replacing previous import ‘purrr::list_along’ by ‘rlang::list_along’ when loading ‘gdeltr2’
25: replacing previous import ‘purrr::invoke’ by ‘rlang::invoke’ when loading ‘gdeltr2’
26: replacing previous import ‘purrr::flatten_raw’ by ‘rlang::flatten_raw’ when loading ‘gdeltr2’
27: replacing previous import ‘purrr::modify’ by ‘rlang::modify’ when loading ‘gdeltr2’
28: replacing previous import ‘purrr::as_function’ by ‘rlang::as_function’ when loading ‘gdeltr2’
29: replacing previous import ‘purrr::flatten_dbl’ by ‘rlang::flatten_dbl’ when loading ‘gdeltr2’
30: replacing previous import ‘data.table:::=’ by ‘rlang:::=’ when loading ‘gdeltr2’
31: replacing previous import ‘jsonlite::unbox’ by ‘rlang::unbox’ when loading ‘gdeltr2’
32: replacing previous import ‘purrr::flatten_lgl’ by ‘rlang::flatten_lgl’ when loading ‘gdeltr2’
33: replacing previous import ‘purrr::flatten_int’ by ‘rlang::flatten_int’ when loading ‘gdeltr2’
34: replacing previous import ‘purrr::%@%’ by ‘rlang::%@%’ when loading ‘gdeltr2’
35: replacing previous import ‘purrr::flatten_chr’ by ‘rlang::flatten_chr’ when loading ‘gdeltr2’
36: replacing previous import ‘purrr::splice’ by ‘rlang::splice’ when loading ‘gdeltr2’
37: replacing previous import ‘purrr::flatten’ by ‘rlang::flatten’ when loading ‘gdeltr2’
38: replacing previous import ‘purrr::prepend’ by ‘rlang::prepend’ when loading ‘gdeltr2’
39: replacing previous import ‘readr::guess_encoding’ by ‘rvest::guess_encoding’ when loading ‘gdeltr2’
40: replacing previous import ‘purrr::pluck’ by ‘rvest::pluck’ when loading ‘gdeltr2’
41: replacing previous import ‘urltools::url_parse’ by ‘xml2::url_parse’ when loading ‘gdeltr2’
42: replacing previous import ‘rlang::as_list’ by ‘xml2::as_list’ when loading ‘gdeltr2’
43: replacing previous import ‘rlang::flatten_chr’ by ‘purrr::flatten_chr’ when loading ‘gdeltr2’
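These warnings are harmless at run time: they mean the package's NAMESPACE imports whole packages whose exports overlap, and R is reporting which duplicate wins. A hedged sketch of the usual fix on the package-author side (assuming roxygen2 is used to generate the NAMESPACE) is to import only the functions actually needed:

```r
# Importing entire packages causes these collisions:
#' @import dplyr
#' @import lubridate

# Importing only the functions the package uses avoids them:
#' @importFrom dplyr mutate select filter
#' @importFrom lubridate ymd year month
```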

small typo

gdeltr2/R/gdelt_event_gkg.R

line 685
mutate(dateData = idDate %>% lubridat::ymd() %>% as.Date()) %>%

should be
mutate(dateData = idDate %>% lubridate::ymd() %>% as.Date()) %>%

lubridat -> lubridate

Invalid char in JSON text

Hey everyone,

well, first of all: Thanks a lot for this amazing package! I just stumbled upon it. I always wanted to access GDELT data, however, I was always intimidated by the sheer size of the project.

I was playing around a bit and tried the blog post that is linked here on github.

Some of the functions mentioned there are not up to date anymore, but a little trial and error got me running this function:


ft_v2_api(terms = my_terms, domains = my_domains, images_web_tag = my_image_web, 
                   images_tag = my_image_tags, images_ocr = my_ocr, gkg_themes = my_themes, 
                   modes = c("Artlist"), timespans = my_timespan, trelliscope_parameters = my_trelliscope_parameters)

Getting this output:

Sleeping for 6 seconds
Joining, by = "modeSearch"
Removing the following cognostics that are all NA: domainSearch, imageocrSearch, imagewebtagSearch
writing display list [===================================================================>-------------]  83% 5/6 eta: 0sError: lexical error: invalid char in json text.
          ço em Espanha em busca de " parceria estratégica " | NOT�
                     (right here) ------^

I get the idea that probably some escaping might be needed here. However, I don't know where to look for a solution, since documentation is still quite sparse (I don't complain, I'm happy that a package like this exists and that there is work being done on it!)
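One place to start looking, sketched here as an assumption rather than a confirmed fix, is re-encoding the article text with `iconv()` so invalid bytes are dropped before the text reaches the JSON writer:

```r
# Simulate a Latin-1 byte ("é" as \xe9) leaking into a UTF-8 pipeline:
bad <- "parceria estrat\xe9gica"

# Re-encode, dropping anything that cannot be converted (sub = ""):
clean <- iconv(bad, from = "latin1", to = "UTF-8", sub = "")
validUTF8(clean)
#> [1] TRUE
```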

Any thoughts on my problem?

Best

Patrick

gdeltr2 functions access gdelt v1 vars, not v2 vars

GDELT 2.0 variables are mentioned in several places in the gdeltr2 documentation, but unless I am missing something, none of the v2 variables that have been added in the past few years are accessible using gdeltr2. I have used these gdeltr2 functions to access GDELT data, but they return only v1 variables, not v2 variables.

This is true for these gdeltr2 functions:

gkg_tv_days()
get_data_gkg_days_detailed()
ft_v2_api()

Would it be possible to update gdeltr2 so that it can access the full v2 API? As you know, it includes these variables from the codebook.

http://data.gdeltproject.org/documentation/GDELT-Global_Knowledge_Graph_Codebook-V2.pdf

Thank you for gdeltr2

Ken

Package asbviz missing

Hi, you posted a few examples using the gdeltr2 package, such as the one below. Unfortunately I'm not able to find the `asbviz::hc_xy` function you use. Is this package still available?

gdeltr2::ft_v2_api(
  terms = c("RepRisk", "Sentify", "Thomas", "Aebi"),
  modes = c("TimelineVolInfo"),
  visualize_results = F,
  timespans = "13 days",
  source_countries = "US"
) %>%
  rename(tone = value) %>%
  asbviz::hc_xy(
    name = "titleArticle",
    x = "datetimeData",
    y = "tone",
    group = "termSearch",
    title = "Media Tone -- Last 13 Days",
    subtitle = "data via gdeltr2",
    type = "spline",
    link = "urlArticle",
    use_point_select = T,
    color_palette = "pals::kovesi.rainbow_bgyr_35_85_c73",
    override_legend_location = NULL,
    theme_name = "elementary"
  )
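Since `asbviz` does not appear to be published anywhere public, a rough stand-in for the same chart can be built with `ggplot2`. The column names (`datetimeData`, `tone`, `termSearch`) are assumed from the call above, and the data frame here is a toy substitute for the `ft_v2_api()` result:

```r
library(ggplot2)

# Toy stand-in for the API result, using the columns the call above expects:
df <- data.frame(
  datetimeData = Sys.time() + 3600 * (1:6),
  tone         = c(-1.2, 0.4, 0.1, -0.3, 0.8, 0.5),
  termSearch   = rep(c("RepRisk", "Sentify"), each = 3)
)

# One spline-like line per search term, matching the asbviz grouping:
ggplot(df, aes(datetimeData, tone, colour = termSearch)) +
  geom_line() +
  labs(title = "Media Tone -- Last 13 Days", subtitle = "data via gdeltr2")
```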
