My Notes
- Work
- Recurring Tasks
- Sprint Work
- Sprint Ceremonies
- invalid startDate
- investigate lost trainings
- ARCHIVE
- REVIEW STS-847 iris customize bar
- STS-850 mixpanel tracking
- CI/CD with Stefan
- STS-838 hide bv cats in tools
- Explore reordering with react-dnd
- STS-762 delete cat when classifier creation failed
- STS-831 cat hover style
- STS-834 mentions style
- STS-833 classifier with obsolete sub cat
- Saving Classifiers [7/7]
- Deleting Classifiers
- fix slack alerting for token service in prod
- Other meetings
- fb token service
- Howtos
- Things to pass on
- Other
- Axiom WG
- Private
Work
Recurring Tasks
TODO Catch-Up (Emails/Slack)
DEADLINE: <2019-09-27 Fri .+1d>
- State "DONE" from "WAIT" [2019-09-26 Thu 08:48]
- State "DONE" from "TODO" [2019-09-16 Mon 08:50]
- State "DONE" from "PROJ" [2019-09-02 Mon 23:08]
- State "DONE" from "PROJ" [2019-08-29 Thu 08:54]
- State "DONE" from "PROJ" [2019-08-28 Wed 09:07]
:PROPERTIES:
:LAST_REPEAT: [2019-09-02 Mon 23:08]
:END:
CLOCK: [2019-09-16 Mon 08:19]
CLOCK: [2019-08-29 Thu 08:25]–[2019-08-29 Thu 08:55] => 0:30
CLOCK: [2019-08-28 Wed 08:59]–[2019-08-28 Wed 09:05] => 0:06
TODO Code Reviews
DEADLINE: <2019-09-27 Fri .+1d>
- State "DONE" from "WAIT" [2019-09-26 Thu 08:49]
- State "DONE" from "PROJ" [2019-09-02 Mon 23:08]
CLOCK: [2019-08-30 Fri 08:13]–[2019-08-30 Fri 08:25] => 0:12
Sprint Work
Sprint Ceremonies
CLOCK: [2019-08-27 Tue 14:00]–[2019-08-27 Tue 15:49] => 1:49
DONE invalid startDate
CLOSED: [2019-09-26 Thu 08:49] log link
Theory: the project timezone is something luxon does not know. The project timezone in this project is "US/Central", though it seems luxon does know that one. So maybe it was something else before, or this theory is bogus.
Did not see any mixpanel data to show the timezone.
Let's see what Rikki reports back on the timezone and verify then if it could be caused by that.
Consider checking first whether the given timezone is valid and use 'utc' as a fallback.
The good thing is that the user will most likely not see a huge difference when looking at the mentions, so it's better to show utc than to fail for sure.
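The fallback idea could look like this. This is a hypothetical sketch, not the project's actual luxon code: it uses the standard Intl API (which throws a RangeError for unknown IANA zone names) to validate the zone, and the function name `safeZone` is made up for illustration.

```typescript
// Hypothetical sketch: validate an IANA zone name and fall back to "utc"
// when the runtime does not know it. Intl.DateTimeFormat throws a
// RangeError for unknown time zones.
function safeZone(zone: string): string {
  try {
    new Intl.DateTimeFormat("en-US", { timeZone: zone });
    return zone;
  } catch {
    return "utc";
  }
}

console.log(safeZone("US/Central")); // "US/Central" (a known alias)
console.log(safeZone("US/Middle"));  // "utc" (unknown zone, fall back)
```

Whether luxon and Intl agree on every zone alias would still need checking against the actual luxon version in use.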
DONE investigate lost trainings
CLOSED: [2019-10-07 Mon 10:02]
"Goose Island Brand Values", project: Amy, client: Staff - Integration
userId 212083879 projectId 1998259083 queryIds 1999839102 1999916473 classifierId categoryId 6754396 subCategoryIds 6754397 6754398 6754399
First observation: the summary call https://app.brandwatch.com/fe-api/projects/1998259083/classifiers/trainings?_=1569391751072
always has trainingSet as null.
Turns out that we don't handle the case when fetching a classifier or category fails.
If fetching a classifier fails, the interface looks like all training data is lost, but it actually is NOT.
But still not sure if a network issue is the cause of this.
Trying out what happens when I stall these requests:
- /projects/162524241/queries/summary
- /projects/162524241/classifiers/trainings/8885308
- /projects/162524241/categories
- /projects/162524241/data/mentions
ARCHIVE
DONE REVIEW STS-847 iris customize bar
CLOSED: [2019-09-16 Mon 08:43]
There are multiple issues about disabling IRIS if it's unsupported, but there is a deeper issue here: peaks are not shown in the sidebar although they do show on the chart after you change dimensions.
Issued a PR to fix showing the peaks.
DONE STS-850 mixpanel tracking
CLOSED: [2019-09-10 Tue 22:06]
CLOCK: [2019-08-30 Fri 12:13]–[2019-08-30 Fri 15:00] => 2:47
CLOCK: [2019-08-29 Thu 08:55]–[2019-08-29 Thu 16:13] => 7:18
CLOCK: [2019-08-28 Wed 16:43]–[2019-08-28 Wed 17:54] => 1:11
By default an event has following data { userId, clientId, clientName }
- create classifier Event: Classifiers Action: Create Data: projectId,
- edit classifier Event: Classifiers Action: Edit Data: projectId "Parent Category Id",
- closing with X Event: Classifiers Action: Aborted Data: projectId "Parent Category Id" (id or null if new classifier) Type: "Close Icon"
- closing via cancel button Event: Classifiers Action: Aborted Data: projectId "Parent Category Id" (id or null if new classifier) Type: "Cancel Button"
- saving Event: Classifiers Action: Save Data: projectId "Parent Category Id" (id or null if new classifier) "Is Training Enabled" (true || false) "Classify Historical" (true || false) "Category Count"
- open category Event: Classifiers Action: Open Category Data: projectId "Parent Category Id" (id or null if new classifier) "Category Id" Type: 'Category Click'
- closing category Event: Classifiers Action: Close Category Data: projectId "Parent Category Id" (id or null if new classifier) "Category Id" "Type" ("Category Click" or "Back Button")
- assigning mentions Event: Classifiers Action: Assign Mentions Data: projectId "Parent Category Id" (id or null if new classifier) "Category Id" "Interaction Type" ("Drag And Drop" or "Actions Dropdown")
- unassign mentions Event: Classifiers Action: Unassign Mentions Data: projectId "Parent Category Id" (id or null if new classifier) "Category Id" "Interaction Type": "Actions Dropdown"
- search Event: Classifiers Action: Search Data: projectId queryIds "Parent Category Id" (id or null if new classifier) "Search Term"
- load more mentions Event: Classifiers Action: Load Mentions Data: projectId queryIds "Parent Category Id" (id or null if new classifier) "Search Term"
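The common shape of these events could be captured in one helper. This is an illustrative sketch, not the real tracking module: the function name `trackEvent` and the merge order are assumptions; only the default data `{ userId, clientId, clientName }` and the Event/Action fields come from the notes above.

```typescript
// Hypothetical sketch: merge the default event data with per-event data.
interface Defaults { userId: number; clientId: number; clientName: string }

function trackEvent(
  defaults: Defaults,
  action: string,
  data: Record<string, unknown>,
): Record<string, unknown> {
  // Every classifier event shares Event: "Classifiers" plus the defaults.
  return { Event: "Classifiers", Action: action, ...defaults, ...data };
}

const payload = trackEvent(
  { userId: 1, clientId: 2, clientName: "Acme" },
  "Save",
  { projectId: 3, "Category Count": 4 },
);
console.log(payload.Action); // "Save"
```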
DONE CI/CD with Stefan
CLOSED: [2019-09-09 Mon 16:51] SCHEDULED: <2019-09-05 Thu 14:30>
DONE STS-838 hide bv cats in tools
CLOSED: [2019-09-02 Mon 22:38] SCHEDULED: <2019-08-28 Wed>
CLOCK: [2019-09-02 Mon 08:15]–[2019-09-02 Mon 13:59] => 5:44
CLOCK: [2019-08-28 Wed 14:23]–[2019-08-28 Wed 16:43] => 2:20
CLOCK: [2019-08-28 Wed 09:08]–[2019-08-28 Wed 12:13] => 3:05
CLOCK: [2019-08-27 Tue 13:45]–[2019-08-27 Tue 14:00] => 0:15
DONE Explore reordering with react-dnd
CLOSED: [2019-08-27 Tue 08:48]
DONE STS-762 delete cat when classifier creation failed
CLOSED: [2019-08-23 Fri 08:29]
When saving a classifier, the category is saved before the classifier. We need to do this, as we create/edit categories via the BW API and create/edit classifiers via the classifiers API.
A couple of scenarios we need to consider when saving the classifier fails:
- Create a classifier
When creating and saving a classifier, the corresponding category is created first. Only when that succeeded at least once do we need to consider deleting it when the dialog is closed by the user without a successful save of the classifier.
So when closing the training interface we need to check that
- we are creating a classifier (not editing one)
- the category was saved successfully (at least once)
- saving the classifier always failed
When all of these conditions are met we can safely delete the (unused) category without any further notice.
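The close-time check above can be sketched as a single predicate. The flag names here are illustrative, not the actual provider state:

```typescript
// Hypothetical sketch of the close-time check; names are illustrative.
interface TrainingState {
  isCreating: boolean;        // creating, not editing, a classifier
  categorySavedOnce: boolean; // the category was saved at least once
  classifierSaved: boolean;   // whether the classifier was ever saved
}

function shouldDeleteCategoryOnClose(s: TrainingState): boolean {
  // Delete only in the create case, where a category exists but the
  // classifier never made it to the backend.
  return s.isCreating && s.categorySavedOnce && !s.classifierSaved;
}

console.log(shouldDeleteCategoryOnClose({
  isCreating: true, categorySavedOnce: true, classifierSaved: false,
})); // true: safe to silently delete the unused category
```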
- Edit a classifier
If editing a classifier failed, we won't easily be able to revert the changes that were made to the category beforehand. For that to work we would need to cache the latest good version to be able to restore that state. Either way, sub categories might get different ids if a category was deleted and should be restored, for example. That would force us to update the classifier with the new ids, which will most likely not work either, as saving the classifier failed in the first place. I'm not convinced this is worth doing; it introduces even more complications.
DONE STS-831 cat hover style
CLOSED: [2019-08-23 Fri 08:29]
DONE STS-834 mentions style
CLOSED: [2019-08-23 Fri 08:29]
DONE STS-833 classifier with obsolete sub cat
CLOSED: [2019-08-21 Wed 10:35]
The category 8741525 in project 162518254 has currently 10 sub categories.
The corresponding classifier though has 12 sub categories.
When saving the classifier, the BE is unhappy about the number of categories.
To reproduce, simply remove a category from the training interface or the edit category dialog. After that the classifier will still contain the deleted category ids.
After saving the category, we need to make sure the categorySet only contains sub categoryIds that actually exist in the category.
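The pruning step could be a simple set-based filter. This is a hypothetical sketch; the shapes (`categorySet`, `subCategoryIds`) are illustrative names, not the real API:

```typescript
// Hypothetical sketch: after saving the category, drop any sub category ids
// from the classifier's categorySet that no longer exist on the category.
interface Category { id: number; subCategoryIds: number[] }
interface Classifier { categorySet: number[] }

function pruneCategorySet(classifier: Classifier, category: Category): Classifier {
  const existing = new Set(category.subCategoryIds);
  return {
    ...classifier,
    categorySet: classifier.categorySet.filter((id) => existing.has(id)),
  };
}

// Like the bug above: 12 ids on the classifier, only 10 sub categories left.
const category = { id: 8741525, subCategoryIds: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] };
const classifier = { categorySet: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] };
console.log(pruneCategorySet(classifier, category).categorySet.length); // 10
```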
DONE Saving Classifiers [7/7]
CLOSED: [2019-08-19 Mon 10:19]
DONE implement saving
CLOSED: [2019-07-23 Tue 13:58]
DONE separate classifierId from categoryId in Classification provider
CLOSED: [2019-07-23 Tue 13:58]
When creating, we save the category first and update the Classification provider with the new categoryId, which is propagated to the provider as classifierId. This approach does not allow us to know whether we are in update or create mode.
So we need to separate classifierId and parentCategoryId again in the provider.
DONE hook saving together with CategoryProvider
CLOSED: [2019-07-30 Tue 12:21]
Not sure yet how to communicate between the classifier and category context. These are the actions in the create case (i.e. the category does not exist yet):
1. Saving is kicked off by calling CategoryContext#save in the Footer
2. CategoryContext saves the category and receives new categoryIds
3. Category updates the global by calling the callback ReactApp#onCategorySaved
4. CategoryContext dispatches the action updateCategoryId to update the categoryIds in the ClassifierContext
5. CategoryContext dispatches the action savingCategorySucceeded to update its own state
6. When the promise returned from 1. resolves, we call ClassifierContext#save in the Footer
7. When the classifier is saved, ClassifierContext calls ReactApp#onClassifierSaved
8. onClassifierSaved updates the classifier global and adds a notification
The problem I'm currently facing is that the created ClassifierContext#save does not yet have access to the current state updated by the updateCategoryId action, but uses the previous state when being executed.
The reason why this happens is explained in https://reactjs.org/docs/hooks-faq.html#why-am-i-seeing-stale-props-or-state-inside-my-function. Basically, when `save` is executed it sees the state from when it was defined, and as the component has not yet re-rendered, that is stale state.
One solution is to put the request to save the classifier into the state
and use the useEffect hook to execute the save when that state is set. This
is not very nice, but ensures the save actually uses the latest state, as
it's executed after rendering. It bears the risk that incorrect state
management will cause save to run in a loop.
The solution I went with for now is a state property indicating that saving was requested. The corresponding provider watches for that state prop and executes the save when it sees it. This is a somewhat ugly solution, but the saving is executed exactly as intended, maintaining the order and using the latest state.
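The "save requested" flag idea can be shown without React. This is a framework-free sketch under assumptions: the plain `dispatch`/`effect` functions stand in for a reducer plus useEffect, and all names are made up for illustration.

```typescript
// Hypothetical, framework-free sketch of the "save requested" flag pattern:
// instead of calling save() from a closure (which may see stale state), a
// reducer sets `saveRequested`, and an effect that runs after every state
// update performs the save with the freshest state.
type State = { categoryId: number | null; saveRequested: boolean };
type Action =
  | { type: "updateCategoryId"; categoryId: number }
  | { type: "requestSave" }
  | { type: "saveHandled" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "updateCategoryId":
      return { ...state, categoryId: action.categoryId };
    case "requestSave":
      return { ...state, saveRequested: true };
    case "saveHandled":
      // Reset the flag so the effect does not trigger a save loop.
      return { ...state, saveRequested: false };
  }
}

const saved: Array<number | null> = [];
let state: State = { categoryId: null, saveRequested: false };

function dispatch(action: Action): void {
  state = reducer(state, action);
  effect(); // stands in for useEffect running after a re-render
}

function effect(): void {
  if (state.saveRequested) {
    saved.push(state.categoryId); // the save sees the latest state
    dispatch({ type: "saveHandled" });
  }
}

// The category is saved first and yields a new id, then a save is requested.
dispatch({ type: "updateCategoryId", categoryId: 42 });
dispatch({ type: "requestSave" });
console.log(saved); // [42], not the stale null
```

In real React the effect would depend on the flag and the provider would reset it after saving, which is exactly where the loop risk mentioned above lives.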
DONE update global
CLOSED: [2019-07-29 Mon 08:50]
DONE show notification
DONE Review
CLOSED: [2019-08-19 Mon 10:19]
DONE QA
CLOSED: [2019-08-19 Mon 10:19]
DONE Deleting Classifiers
CLOSED: [2019-08-19 Mon 10:19]
DONE fix slack alerting for token service in prod
CLOSED: [2019-08-29 Thu 08:51]
Other meetings
CLOCK: [2019-08-27 Tue 16:00]–[2019-08-27 Tue 17:16] => 1:16
DONE Catchup with Sammy
CLOSED: [2019-09-17 Tue 07:54]
Iris Bug
mixpanel tracking
hide cats in rules and followup bugs
unreproducible: training data is lost STS-890
startDate endDate null when getting mentions
LD/module PR needs update - BE is in place
DONE Answer Erik
CLOSED: [2019-09-17 Tue 12:52]
When a user is trying to add a non-Owned Facebook page, and enters a URL for that page, we need to make a call (to the token service?) to get any available user token and use that to get page contents. Does this API endpoint already exist, or do we need to create one? Do we need to be worried about rate limiting? If there are no available user tokens, do we want an error or redirect to the auth token flow? (Mitch/Claudio that last one might be for you)
With "…to get the page contents" you mean getting the Facebook page name, icon, description etc., correct? That's at least what we do in Analytics so far, to let the user verify that they picked the right one.
To get the Facebook page data you can also use the Facebook Graph API directly, using the short-lived user token you currently have (assuming that checking whether the user is already authenticated with Facebook, or authenticating with Facebook, is done before that step).
As I'm proposing to talk to the Graph API directly using the user token, I don't think rate limits will be an issue. There are no rate limits in the token service or the BW API (afaik).
Generally speaking, we should never hand out any (long-lived) user or page token we have in the backend to the client, as it somewhat breaks the security model Facebook proposes.
- The endpoints to get a list of authenticated pages, the # of remaining hashtags, and storing hashtags for an authenticated page already exist in the BW API.
/instagramHashtags/facebookPages?clientId=${clientId}
fb token service
TODO investigate access_token null
INVITE SECRET iPh5foo2ief4uv,i (with line break)
https://app.brandwatch.com/fbauth/3f509c305307099e1b58d4a8ce84510b
DONE fix log level in stackoverflow
CLOSED: [2019-09-24 Tue 18:50]
Deployed to live. The root cause was that Stackdriver does not evaluate the log level at all for severity; you need to set that manually as well /o\.
Howtos
setup mitmproxy
- write a script to modify the response

from mitmproxy import http

def request(flow):
    if flow.request.pretty_url.find("/classifiers/trainings") >= 0 and flow.request.method == 'PUT':
        flow.response = http.HTTPResponse.make(
            500,
            "<html><body>failed with mitmproxy</body></html>",
            {"content-type": "text/html"},
        )

- start the reverse proxy with the script and point the dev server at it

mitmproxy --mode reverse:https://bwjsonapi.stage.brandwatch.net -p 9999 -s <scriptName>
./runInDevelopment --apiUrl=http://localhost:9999
mitmproxy scripts
throttle all requests in random order

from threading import Timer
from random import random

def request(flow):
    # intercept each request and resume it after a random delay of up to 2s
    resumeLater = Timer(random() * 2, flow.resume)
    flow.intercept()
    resumeLater.start()
jq usage
Get ids from a JSON response like { results: [{id: 4}, ...] }
curl URL | jq '.results[] | .id'
Things to pass on
TODO kitchen duty calendar
TODO slack calendar integrations
TODO dependabot assignment
Other
DONE prepare for Engineering Talk
CLOSED: [2019-09-02 Mon 22:34] DEADLINE: <2019-08-30 Fri> SCHEDULED: <2019-08-27 Tue>
CLOCK: [2019-09-02 Mon 14:00]–[2019-09-02 Mon 16:00] => 2:00
DONE prepare for BrightView Onboarding
CLOSED: [2019-09-10 Tue 21:29] DEADLINE: <2019-09-03 Tue> SCHEDULED: <2019-08-29 Thu>
DONE org-mode lightning talk
CLOSED: [2019-09-26 Thu 08:50]
TODO separate randomly failing unit tests
There are some randomly failing unit tests in the frontend. We currently retry ALL backbone frontend unit tests when they fail.
This slows down CI and does not really help in identifying these tests.
As an intermediate solution we want to move these randomly failing unit tests into their own directory, to separate them from unit tests that work fine.
This enables us to stop retrying all of them when some test fails.
Failing tests
- DashboardToolbarView
- DashboardView
- DataDownloadCollectionView
- DataDownloadFormView
- InsightsCentral_wrapDashboardView
- GuiderView
- InsightsCentral FilterContextMenuView
- QueryBuilderWriteView
Expected: false Received: "The test \"QueryBuilderWriteView rendering auto fetch of preview results on language change does not trigger fetching when query is new or was not validated (validatedSettings are empty)\" added a new child element to body, please remove it: <div class=\"validation-tip notification-error rounded smallpadding-vertical singlepadding-horizontal\"></div>"
Solved most of them by stubbing jquery.showValidation in various places.
Axiom WG
Private
DONE Tax return 2018 (Steuererklärung)
CLOSED: [2019-09-26 Thu 08:50]